Risk of (gambler’s) ruin

Suppose that you start with an initial bankroll of m dollars, and repeatedly make a wager that pays $1 with probability p, and loses $1 with probability 1-p.  What is the risk of ruin, i.e., the probability that you will eventually go broke?

This is the so-called gambler’s ruin problem.  It is a relatively common exercise to show that if p \leq 1/2, the probability of ruin is 1, and if p > 1/2, then the probability is

(\frac{1-p}{p})^m

But what if the wager is not just win-or-lose a dollar, but is instead specified by an arbitrary probability distribution of outcomes?  For example, suppose that at each iteration, we may win any of (-2,-1,0,+1,+2) units, with respective probabilities (1/15,2/15,3/15,4/15,5/15).  The purpose of this post is to capture my notes on some seemingly less well-known results in this more general case.

(The application to my current study of blackjack betting is clear: we have shown that, at least for a shoe game, even if we play perfectly, we are still going to lose if we don’t vary our bet.  We can increase our win rate by betting more in favorable situations… but a natural constraint is to limit our risk of ruin, or probability of going broke.)

Schlesinger (see Reference (2) below) gives the following formula for risk of ruin, due to “George C., published on p. 8 of ‘How to Win $1 Million Playing Casino Blackjack'”:

(\frac{1 - \frac{\mu}{\sigma}}{1 + \frac{\mu}{\sigma}})^\frac{m}{\sigma}

where \mu and \sigma are the mean and standard deviation, respectively, of the outcome of each round (or hourly winnings).  It is worth emphasizing, since it was unclear to me from the text, that this formula is an approximation, albeit a pretty good one.  The derivation is not given, but the approach is simple to describe: normalize the units of both bankroll and outcome of rounds to have unit variance (i.e., divide everything by \sigma), then use the standard two-outcome ruin probability formula above with win probability p chosen to reflect the appropriate expected value of the round, i.e., p - (1-p) = \mu / \sigma.

The unstated assumption is that 0 < \mu < \sigma (note that ruin is guaranteed if \mu < 0, or if \mu = 0 and \sigma > 0); the accuracy of the approximation also depends on \mu \ll \sigma \ll m, which is fortunately generally the case in blackjack.
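
To make the formula concrete, here is a short Python sketch applying it to the five-outcome example above, with an (arbitrarily chosen) initial bankroll of 10 units; note that in this toy example \mu is not particularly small compared to \sigma, so we should not expect the approximation to be especially accurate.

from math import sqrt

def ruin_approx(outcomes, probs, bankroll):
    # Schlesinger/"George C." approximation: ((1 - mu/sigma) / (1 + mu/sigma))^(bankroll/sigma)
    mu = sum(x * p for x, p in zip(outcomes, probs))
    sigma = sqrt(sum(x * x * p for x, p in zip(outcomes, probs)) - mu * mu)
    assert 0 < mu < sigma, "approximation assumes 0 < mu < sigma"
    return ((1 - mu / sigma) / (1 + mu / sigma)) ** (bankroll / sigma)

# Example from above: win -2..+2 units with probabilities 1/15, 2/15, 3/15, 4/15, 5/15.
print(ruin_approx([-2, -1, 0, 1, 2], [1/15, 2/15, 3/15, 4/15, 5/15], bankroll=10))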

There is an exact formula for risk of ruin, at least as long as outcomes of each round are bounded and integral.  In Reference (1) below, Katriel describes a formula involving the roots inside the complex unit disk of the equation

\sum p_k z^k = 1

where p_k is the probability of winning k units in each round.  Execution time and numeric stability make effective implementation tricky.
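
I have not reproduced Katriel's formula here, but the root-finding step itself is easy to sketch; following is a short example for the five-outcome distribution above.  (Clearing the negative powers by multiplying through by z^2 is just my own bookkeeping to get an ordinary polynomial for numpy; see the paper for how the roots are then combined into the ruin probability.)

import numpy as np

# Probabilities of winning k = -2, -1, 0, +1, +2 units in a round.
p = {-2: 1/15, -1: 2/15, 0: 3/15, 1: 4/15, 2: 5/15}

# Multiply sum_k p_k z^k = 1 through by z^2 to clear the negative powers:
# p[2] z^4 + p[1] z^3 + (p[0] - 1) z^2 + p[-1] z + p[-2] = 0.
coeffs = [p[2], p[1], p[0] - 1, p[-1], p[-2]]   # descending powers, as np.roots expects

roots = np.roots(coeffs)
print([z for z in roots if abs(z) < 1 - 1e-12])  # roots strictly inside the unit disk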

Finally, just to have some data to go along with the equations, following is an example of applying these ideas to analysis of optimal betting in blackjack.  Considering the same rules and setup as in the most recent posts (6 decks, S17, DOA, DAS, SPL1, no surrender, 75% penetration), let’s evaluate all possible betting ramps with a 1-16 spread through a maximum (floored) true count of 10, for each of five different betting and playing strategies, ranging from simplest to most complex:

  1. Fixed basic “full-shoe” total-dependent zero-memory strategy (TDZ), using Hi-Lo true count for betting only.
  2. Hi-Lo with the Illustrious 18 indices.
  3. Hi-Lo with full indices.
  4. Hi-Opt II with full indices and ace side count.
  5. “Optimal” betting and playing strategy, where playing strategy is CDZ- optimized for each pre-deal depleted shoe, and betting strategy is ramped according to the corresponding exact pre-deal expected return.

Then assuming common “standard” values of $10,000 initial bankroll, a $10 minimum bet, and 100 hands per hour, the following figure shows the achievable win rate ($ per hour) and corresponding risk of ruin for each possible strategy and betting ramp:

Win rate vs. risk of ruin for various betting and playing strategies.

There are plenty of interesting things to note and investigate here.  The idea is that we can pick a maximum acceptable risk of ruin– such as the red line in the figure, indicating the standard Kelly-derived value of 1/e^2, or about 13.5%– and find the betting ramp that maximizes win rate without exceeding that risk of ruin.  Those best win rates for this particular setup are:

  1. Not achievable for TDZ (see below).
  2. $20.16/hr for Hi-Lo I18.
  3. $21.18/hr for Hi-Lo full.
  4. $26.51/hr for Hi-Opt II.
  5. $33.09/hr for optimal play.

Fixed basic TDZ strategy, shown in purple, just isn’t good enough; that is, there is no betting ramp with a risk of ruin smaller than about 15%.  And some betting ramps, even with the 1-16 spread constraint, still yield a negative overall expected return, resulting in the “tail” at P(ruin)=1.  (But that’s using Hi-Lo true count as the “input” to the ramp; it is possible that “perfect” betting using exact pre-deal expected return could do better.)

References:

  1. Katriel, Guy, Gambler’s ruin probability – a general formula, arXiv:1209.4203v4 [math.PR], 2 July 2013
  2. Schlesinger, Don, Blackjack Attack: Playing the Pros’ Way, 3rd ed. Las Vegas: RGE Publishing, Ltd., 2005

A harder birthday problem

It is a well-known non-intuitive result that in a group of n=23 people– conveniently the size of a classroom of students– the probability is at least 1/2 that k=2 or more of them share a birthday.  This is a nice problem for several reasons:

  1. Its solution involves looking at the problem in a non-obvious way, in this case by considering the complementary event that all birthdays are distinct.
  2. Once the approach in (1) is understood, computing the answer is relatively easy: the probability is 1-(365)_{n}/365^n, where (x)_{n} is the falling factorial (see the short computation following this list).
  3. The answer is surprising.  When I ask students, “How many people are needed for the probability of a shared birthday to exceed 1/2?”, guesses as high as 180 are common.
  4. Picking on the not-quite-realistic assumption of all d=365 birthdays being equally likely actually helps; that is, with a non-uniform distribution of birthdays, the probability of coincidence is higher.
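
For the classical k=2 case, the computation in item 2 really is a one-liner; here is a quick check in Python, using math.perm for the falling factorial:

from math import perm

print(1 - perm(365, 23) / 365**23)   # ~0.5073: just over 1/2 for a 23-person class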

But what about larger k?  For example, suppose that while surveying your students’ birthdays in preparation for this problem, you find that three of them share a birthday.  What is the probability of this happening?

The Wikipedia page on the birthday problem doesn’t address this generalization at all.  There are several blog and forum posts addressing the k=3 case specifically, by grouping the prohibited cases according to the number of pairs of people with shared birthdays.  This is generalized to arbitrary k \geq 2 on the Wolfram MathWorld page; the resulting recurrence relation is pretty complex, but it’s a nice exercise to prove that it works.

Probability that at least (2,3,4,5) people share a birthday, vs. group size.

The motivation for this post is to describe what I think is a relatively simpler solution, for arbitrary k, including Python source code to perform the calculation.  Let’s fix the number of equally likely possible birthdays d=365, and the desired number k of people sharing a birthday, and define the function

G(x)=(1+x+\frac{x^2}{2!}+\ldots+\frac{x^{k-1}}{(k-1)!})^d

Then G(x) is the exponential generating function for the number of “prohibited” assignments of birthdays to n people where no more than k-1 share a birthday.  That is, the number of such prohibited assignments is n! times the coefficient of x^n in G(x).

(When working through why this works, it’s interesting how often it can be helpful to transform the problem into a different context.  For example, in this case, we are also counting the number of length-n strings over an alphabet of d characters, where no character appears more than k-1 times.)

The rest of the calculation follows in the usual manner: divide by the total number of possible assignments d^n to get the complementary probability, then subtract from 1.  The following Python code performs this calculation, either exactly– using the fractions module, which can take a while– or in double precision, which is much faster.

import math
import numpy as np
from numpy.polynomial import polynomial as P
import fractions
import operator

def p(k, n, d=365, exact=False):
    # Probability that at least k of n people share one of d equally likely birthdays.
    f = fractions.Fraction if exact else operator.truediv
    # Coefficient of x^n in G(x) = (1 + x + x^2/2! + ... + x^(k-1)/(k-1)!)^d.
    q = P.polypow(np.array(
        [f(1, math.factorial(j)) for j in range(k)], dtype=object),
        d)[n]
    # Multiply by n!/d^n to get the probability that no k people share a birthday.
    for j in range(1, n + 1):
        q = q * f(j, d)
    return 1 - q
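
For example, we can sanity-check the k=2 case against the classical formula above:

print(p(2, 23))                     # ~0.5073, matching 1 - (365)_23 / 365^23
print(float(p(2, 23, exact=True)))  # same calculation done with exact rational arithmetic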

 


Strong induction

What is the best way to explain induction to a student?  That is, given a true-or-false statement P(n) involving a natural number n \geq 0, we would like to prove that the statement is true for all such n \geq 0.  How does one prove such a claim, when it seems to require checking an infinite number of cases?  My motivation for this thinking-out-loud post involves a particular form of induction whose presentation in some textbooks may cause students some confusion (or at least, it did cause confusion in at least one student).

Often the first exposure to induction is the so-called “weak” form: to show that P(n) is true for all n \geq 0, it suffices to prove the following two claims:

  • Base step: P(0) is true.
  • Induction step: For all n \geq 0, if P(n) is true, then P(n+1) is true.

That is,

(P(0) \land \forall n \geq 0 \, (P(n) \Rightarrow P(n+1))) \Rightarrow \forall n \geq 0 \, P(n)

So far, so good.  Although this is technically all that we need, that this is referred to as “weak” induction suggests that there is a stronger form.  There is… sort of.  In his Applied Combinatorics (see Reference 1 below), Tucker skips weak induction altogether, presenting induction in the following “strong” form:

  • Base step: P(0) is true.
  • Induction step: If P(0), P(1), P(2), \ldots, P(n-1) are true, then P(n) is true.

The idea is that the induction hypothesis is stronger, which can make the resulting proof simpler; in the process of proving P(n) in the induction step, we have at our disposal not only the assumed truth of P(n-1), but also all of the other previous statements.  (I say “sort of,” and wrap “weak” and “strong” in quotes, because the two forms are actually equivalent.  Strong induction isn’t really any stronger in the sense of more “proving power”; statements provable using strong induction are also provable by weak induction, and vice versa.)

Notice that the induction step above, quoted directly from Tucker, lacks some rigor in that there is no explicit universal quantifier for n.  That is, for what values of n do we need to demonstrate the induction step?  This is important, because it turns out that if we are more careful, we can economize a bit and demonstrate that P(n) is true for all n \geq 0 by proving the single more compact claim:

  • Strong induction “single-step”: For all n \geq 0, if P(k) is true for all k<n, then P(n) is also true.

That is,

(\forall n \geq 0 \, ((\forall k<n \, P(k)) \Rightarrow P(n))) \Rightarrow \forall n \geq 0 \, P(n)

Velleman (Reference 2) takes this presentation approach, which certainly has elegance going for it.  But he then goes on to “note that no base case is necessary in a proof by strong induction [my emphasis].”  This is where I think things can get confusing.  It’s certainly true that, once we have demonstrated the implication in the single-step version above, then we know that P(0) is true.  As Velleman explains, “plugging in 0 for n, we can conclude that (\forall k<0 P(k)) \Rightarrow P(0).  But because there are no natural numbers smaller than 0, the statement \forall k<0 P(k) is vacuously true.  Therefore, by modus ponens, P(0) is true.  (This explains why the base case doesn’t have to be checked separately in a proof by strong induction [my emphasis]; the base case P(0) actually follows from the modified form of the induction step used in strong induction.)”

There is nothing strictly incorrect here.  What may be misleading, though, is the repeated assurance that no special attention is required for treatment of the base case.  When writing the actual proof of the implication in the single-step version of strong induction, the argument may “look different” for n=0 than it does for n>0.  I think Wikipedia actually does the most admirable job of explaining this wrinkle: “Sometimes the same argument applies for n=0 and n>0, making the proof simpler and more elegant.  In this method it is, however, vital to ensure that the proof of P(n) does not implicitly assume that n>0, e.g. by saying “choose an arbitrary k<n” or assuming that a set of n elements has an element [my emphasis added].”

From a teaching perspective, what are good examples of induction proofs that highlight these issues?  This is a good question.  When discussing strong induction, the most common example seems to be the existence part of the fundamental theorem of arithmetic: every integer greater than 1 is the product of one or more primes.  Here we really can make just one argument, so to speak, without treating the first prime number 2 as a special base case (interestingly, Wikipedia does address it separately).

In a course on graph theory, I think another nice example is the proof that any round-robin tournament contains a Hamiltonian path.  Again, the single-step version of a strong induction argument works here as well (although we have to be a little more careful about what happens when some constructed subsets of vertices end up being empty).

Finally, another common example where strong induction is useful, but special treatment of base case(s) is required, involves postage stamps: show that it is possible to use 4- and 5-cent stamps to form every amount of postage of 12 cents or more.
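
For concreteness, here is a sketch of that last argument, which makes the role of the base cases explicit.  Let P(n) be the statement that n cents of postage can be formed from 4- and 5-cent stamps, and suppose P(k) holds for all 12 \leq k < n.  If n \geq 16, then n-4 \geq 12, so P(n-4) holds by hypothesis, and adding one more 4-cent stamp yields P(n).  But for n = 12, 13, 14, 15, the value n-4 falls below 12, the hypothesis tells us nothing, and each case must be checked directly (12=4+4+4, 13=4+4+5, 14=4+5+5, 15=5+5+5).  This is exactly the wrinkle quoted above: the single-step argument implicitly assumed that n-4 was “in range,” which fails for the smallest values of n.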

References:

  1. Tucker, Alan, Applied Combinatorics, 6th ed. New York: Wiley and Sons, 2012
  2. Velleman, Daniel J., How to Prove It: A Structured Approach. New York: Cambridge University Press, 2006

How many dice?

The figure below shows an “unfolded” view of a typical 6-sided die (d6), used everywhere from board games to casinos:

An “unfolded” view of a standard d6.

Each of the six faces is uniquely labeled with the integers 1 through 6.  But that’s not all; note that the values on opposite faces always sum to 7.  This is a standard arrangement applied more generally to other types of dice as well.  My Platonic solid dice– with n sides for n \in \{4,6,8,12,20\}– all have this same property.  Let us call such an n-sided die standard if all opposite faces sum to n+1.

Problem 1: How many different standard 6-sided dice are there?  That is, in how many ways can we label the faces with distinct integers 1 through 6, with opposite faces summing to 7?

Problem 2: What if we relax the standard (constant opposite sum) property?  That is, in how many ways can we label the faces with distinct integers 1 through 6, with no other restrictions?

Problem 3: Same as Problem 2, but for the other Platonic solids (d4, d8, d12, d20)?

I think these are great examples of problems for students that straddle whatever line there may be between mathematics and computer science.  Problems 1 and 2 can be solved as-is “by hand.”  (And they are interesting in part because the answers are perhaps surprisingly small numbers.)  But the usual mathematical machinery involved just counts the number of dice in each case; it’s an interesting extension as a programming problem to actually enumerate (i.e., list) them, display visual representations of them, etc.
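
As a hint of what the programming problem looks like, following is a brute-force Python sketch for the d6 case (the face indexing and choice of generator rotations are my own arbitrary conventions): it generates the 24 rotations of the cube, canonicalizes each of the 6! labelings by taking the lexicographically smallest rotated copy, and counts the distinct dice with and without the standard opposite-sum constraint.

from itertools import permutations

# Face positions: 0:+x, 1:-x, 2:+y, 3:-y, 4:+z, 5:-z (opposite faces are paired).
# A rotation g relabels a die d into the die whose face j shows label d[g[j]].
ROT_Z = (3, 2, 0, 1, 4, 5)   # quarter turn about the z-axis
ROT_X = (0, 1, 5, 4, 2, 3)   # quarter turn about the x-axis

def rotation_group():
    # Close {ROT_Z, ROT_X} under composition to get all 24 rotations of the cube.
    group = {tuple(range(6))}
    frontier = list(group)
    while frontier:
        g = frontier.pop()
        for h in (ROT_Z, ROT_X):
            c = tuple(g[h[j]] for j in range(6))
            if c not in group:
                group.add(c)
                frontier.append(c)
    return group

ROTATIONS = rotation_group()

def canonical(die):
    # Lexicographically smallest rotation of a die, so rotated copies compare equal.
    return min(tuple(die[g[j]] for j in range(6)) for g in ROTATIONS)

all_dice = {canonical(die) for die in permutations(range(1, 7))}
standard_dice = {d for d in all_dice
                 if d[0] + d[1] == d[2] + d[3] == d[4] + d[5] == 7}
print(len(ROTATIONS), len(all_dice), len(standard_dice))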

Problem 3 is more challenging; the machinery is the same, but the larger numbers involved require some amount of automation in the housekeeping.

Now for what I think is a really hard problem motivating this post (Edit: although after a response from a reader in another forum with a very elegant solution, perhaps this is not as difficult as I thought it might be!):

Problem 4: Same as Problem 1, but for the other Platonic solids– that is, how many different standard (d4, d8, d12, d20) are there, with constant opposite sums?

 


Distribution and variance in blackjack (Part 2)

Introduction

This is a follow-up to the previous post describing the recently-developed algorithm to efficiently compute not just the expected value of a round of blackjack, but the entire probability distribution– and thus the variance– allowing the analysis of betting strategies.  This time, I want to look at some actual data.  But first, a digression motivated by some interesting questions about this analysis:

Why combinatorics?

I spend much of my time in my day job on modeling and simulation, where the usual objective is to estimate the expected value of some random variable, which is a function of the pseudo-randomly generated outcome of the simulation.  For example, what is the probability that the system of interest performs as desired (e.g., detects the target, tracks the target, destroys the target, etc.)?  In that case, we want the expected value of a {0,1} indicator random variable.  The usual approach is to run a simulation of that system many times (sometimes for horrifyingly small values of “many”), recording the number of successes and failures; the fraction of runs that were successful is a point estimate of the probability of success.

Usually, we do things this way because it’s both easier and faster than attempting to compute the exact desired expected value.  It’s easier because the simulation code is relatively simple to write and reason about, since it corresponds closely with our natural understanding of the process being simulated.  And it’s faster because exact computation involves integration over the probability distributions of all of the underlying sources of randomness in the simulation, which usually interact with each other in a prohibitively complex way.  We can afford to wait on the many simulation runs to achieve our desired estimation accuracy, because the exact computation, even if we could write the code to perform it, might take an astronomically longer time to execute.

But sometimes that integration complexity is manageable, and combinatorics is often the tool for managing that complexity.  This is true in the case of blackjack: there are simulations designed to estimate various metrics of the game, and there are so-called “combinatorial analyzers” (CAs) designed to compute some of these same metrics exactly.

My point in this rant is that these two approaches– simulation and CA– are not mutually exclusive, and in some cases it can be useful to combine them.  The underlying objective is the same for both: we want a sufficiently accurate estimate of metrics of interest, where “sufficiently accurate” depends on the particular metrics we are talking about, and on the particular questions we are trying to answer.  I will try to demonstrate what I mean by this in the following sections.

The setup

Using the blackjack rules mentioned in the previous post (6 decks, S17, DOA, DAS, SPL1, no surrender), let’s simulate play through 100,000 shoes, each dealt to 75% penetration, heads-up against the dealer.  Actually, let’s do this twice, once at each of the two endpoints of reasonable and feasible strategy complexity:

For the first simulation, the player uses fixed “basic” total-dependent zero-memory (TDZ) strategy, where:

  • Zero-memory means that strategy decisions are determined solely by the player’s current hand and dealer up card;
  • Total-dependent means that strategy decisions are dependent only on the player’s current hand total (hard or soft count), not on the composition of that total; and
  • Fixed means that the TDZ strategy is computed once, up-front, for a full 6-deck shoe, but then that same strategy is applied for every round throughout each shoe as it is depleted.

In other words, the basic TDZ player represents the “minimum effort” in terms of playing strategy complexity.  Then, at the other extreme, let’s do what is, as far as I know, the best that can be achieved by a player today, assuming that he (illegally) brings a laptop to the table: play through 100,000 shoes again, but this time, using “optimal” composition-dependent zero-memory (CDZ-) strategy, where:

  • Composition-dependent means that strategy is allowed to vary with the composition of the player’s current hand; and
  • “Optimal” means that the CDZ- strategy is re-computed prior to every round throughout each shoe.

(The qualification of “optimal” is due to the minus sign in the CDZ- notation, which reflects the conjecture, now proven, that this strategy does not always yield the maximum possible overall expected value (EV) among all composition-dependent zero-memory strategies.  The subtlety has to do with pair-splitting, and is worth a post in itself.  So even without fully optimal “post-split” playing strategy, we are trading some EV for feasible computation time… but it’s worth noting that, at least for these rule variations, that average cost in EV is approximately 0.0002% of unit wager.)

The cut-card effect

Now for the “combining simulation and CA”: for each simulated round of play, instead of just playing out the hand (or hands, if splitting a pair) and recording the net win/loss outcome, let’s also compute and record the exact pre-round probability distribution of the overall outcome.  The result is roughly 4.3 million data points for each of the two playing strategies, corresponding to roughly 43 rounds per shoe across the 100,000 shoes.

The following figure shows one interesting view of the resulting data: for each playing strategy, what is the distribution of expected return as a function of the number of rounds dealt into the shoe?  The gray curves indicate 5% quantiles, ranging from minimum to maximum EV, and the red and blue curves indicate the mean for basic TDZ and optimal CDZ- strategy, respectively.

Distribution of expected value of a unit wager vs. round, for fixed basic TDZ strategy (left) and CDZ- strategy optimized for the current depleted shoe (right).  Five percent quantiles, from minimum to maximum, are shown in gray, with the mean in red/blue.

There are several interesting things happening here.  Let’s zoom in on just the red and blue curves indicating the mean EV per round:

Estimated expected value of a unit wager vs. round, for fixed basic TDZ strategy (red) and optimal CDZ- strategy (blue).

First, the red curve is nearly constant through most of the initial rounds of the shoe.  Actually, for every round that we are guaranteed to reach (i.e., not running out of cards before reaching the “cut-card” at 75% penetration), the true EV can be shown to be exactly constant.

Second, that extreme dip near the end of the shoe is an illustration of the cut-card effect: if we manage to play an abnormally large number of rounds in a shoe, it’s because there were relatively few cards dealt per round, which means those rounds were rich in tens, which means that the remainder of the shoe is poor in tens, yielding a significantly lower EV.

Keep in mind that we can see this effect with less than 10 million total simulated rounds, with per-round EV estimates obtained from samples of at most 100,000 depleted shoe subsets!  This isn’t so surprising when we consider that each of those sample subsets indicates not just the net outcome of the round, but the exact expected outcome of the round, whose variance is much smaller… most of the time.  For the red TDZ and blue CDZ- strategies shown in the figure, there are actually three curves each, showing not just the estimated mean but also the (plus/minus) estimated standard deviation.  For the later rounds where the cut-card effect means that we have fewer than 100,000 sampled rounds, that standard deviation becomes large enough that we can see the uncertainty more clearly.

Variance

So far, none of this is really new; we have been able to efficiently compute basic and optimal strategy expected return for some time.  The interesting new data is the corresponding pre-deal variance (derived from the exact probability distribution).  The following figure shows an initial look at this, as a scatter plot of variance vs. EV for each round, with an overlaid smoothed histogram to more clearly show the density:

Variance vs. expected value of a unit wager, for fixed basic TDZ strategy (left) and optimal CDZ- strategy (right).

It seems interesting that the correlation sense is roughly reversed between basic and optimal strategy; that is, for the basic strategy player, higher EV generally means lower variance, while for the optimal strategy player with the laptop, the reverse is true.  I can’t say I anticipated this behavior, and upon first and second thoughts I still don’t have an intuitive explanation for what’s going on.

 


Distribution and variance in blackjack

Introduction

Analysis of casino blackjack has been a recurring passion project of mine for nearly two decades now.  I have been back at it again for the last couple of months, this time working on computing the distribution (and thus also the variance) of the outcome of a round.  This post summarizes the initial results of that effort.  For reference up front, all updated source code is available here, with pre-compiled binaries for Windows in the usual location here.

Up to this point, the focus has been on accurate and efficient calculation of exact expected value (EV) of the outcome of a round, for arbitrary shoe compositions and playing strategies, most recently including index play using a specified card counting system.  This is sufficient for evaluating playing efficiency: that is, how close to “perfect play” (in the EV-maximizing sense) can be achieved solely by varying playing strategy?

But advantage players do not just vary playing strategy, they also– and arguably more importantly– have a betting strategy, wagering more when they perceive the shoe to be favorable, wagering less or not at all when the shoe is unfavorable.  So a natural next step is to evaluate such betting strategies… but to do so requires not just expected value, but also the variance of the outcome of each round.  This post describes the software updates for computing not just the variance, but the entire probability distribution of possible outcomes of a round of blackjack.

Rules of the game

For consistency in all discussion, examples, and figures, I will assume the following setup: 6 decks dealt to 75% penetration, dealer stands on soft 17, doubling down is allowed on any two cards including after splitting pairs, no surrender… and pairs may be split only once (i.e., to a maximum of two hands).  Note that this is almost exactly the same setup as the earlier analysis of card counting playing efficiency, with the exception of not re-splitting pairs: although we can efficiently compute exact expected values even when re-splitting pairs is allowed, computing the distribution in that case appears to be much harder.

Specifying (vs. optimizing) strategy

The distribution of outcomes of a round depends on not only the rule variations as described above, but also the playing strategy.  This is specified using the same interface as the original interface for computing expected value, which looks like this:

virtual int BJStrategy::getOption(const BJHand & hand, int upCard,
                    bool doubleDown, bool split, bool surrender);

The method parameters specify the “zero memory” information available to the player: the cards in the current hand, the dealer’s up card, and whether doubling down, splitting, or surrender is allowed in the current situation.  The return value indicates the action the player should take: whether to stand, hit, double down, split, or surrender.  For example, the following implementation realizes the simple– but poorly performing– “mimic the dealer” strategy:

virtual int getOption(const BJHand & hand, int upCard,
        bool doubleDown, bool split, bool surrender) {
    if (hand.getCount() < 17) {
        return BJ_HIT;
    } else {
        return BJ_STAND;
    }
}

When computing EV, instead of returning an explicit action, we can also return the code BJ_MAX_VALUE, meaning, “Take whatever action maximizes EV in this situation.”  For example, the default base class implementation of BJStrategy::getOption() always returns BJ_MAX_VALUE, meaning, “Compute optimal composition-dependent zero memory (CDZ-) strategy, maximizing overall EV for the current shoe.”

However, handling this special return value requires evaluating all possible subsequent hands that may result (i.e., from hitting, doubling down, splitting, etc.).  When only computing EV, we can do this efficiently, but for computing the distribution, an explicit specification of playing strategy is required.

The figure below shows a comparison of the resulting distribution of outcomes for these two example strategies.

Probability distribution of player outcomes of a unit wager on a single round from a full shoe, using “mimic the dealer” strategy (red) vs. optimal strategy (blue).

It is perhaps not obvious from this figure just how much worse “mimic the dealer” performs, yielding a house edge of 5.68% of initial wager; compare this with the house edge of just 0.457% for optimal CDZ- strategy.

Algorithm details

Blackjack is hard (i.e., interesting) because of splitting pairs.  It is manageably hard in the case of expected value, because expected value is linear: that is, if X_1 and X_2 are random variables indicating the outcome of the two “halves” of a split pair, then the expected value of the overall outcome, E[X_1+X_2], is simply the sum of the individual expected values, E[X_1]+E[X_2].  Better yet, when both halves of the split are resolved using the same playing strategy, those individual expected values are equal, so that we need only compute 2E[X_1].  (When pairs may be split and re-split, things get slightly more complicated, but the idea still applies.)

Variance, on the other hand, is not linear– at least when the summands are correlated, which they certainly are in the case of blackjack split hands: Var(X_1+X_2) = Var(X_1) + Var(X_2) + 2 Cov(X_1,X_2), and it is the covariance term that depends on the joint behavior of the two hands.  So my first thought was to simply try brute force, recursively visiting all possible resolutions of a round.  Weighting each resulting outcome by its probability of occurrence, we can compute not just the variance, but the entire distribution of the outcome of the round.

This was unacceptably slow, taking about 5 minutes on my laptop to compute the distribution for CDZ- strategy from a full 6-deck shoe.  (Compare this with less than a tenth of a second to compute the CDZ- strategy itself and the corresponding overall EV.)  It was also not terribly accurate: the “direct” computation of overall EV provides a handy source of ground truth against which we can compare the “derived” overall EV using the distribution.  Because computing that distribution involved adding up over half a billion individual possible outcomes of a round, numerical error resulted in loss of nearly half of the digits of double precision.  A simple implementation of Kahan’s summation algorithm cleaned things up.
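
For reference, compensated summation fits in a few lines; here is a minimal sketch of the idea in Python (the actual fix is in the C++ source linked above), along with a small demonstration of the effect:

import math

def kahan_sum(values):
    # Compensated (Kahan) summation: carry the low-order bits lost at each step.
    total = 0.0
    c = 0.0                   # running compensation for lost low-order bits
    for x in values:
        y = x - c             # apply the correction from the previous step
        t = total + y         # low-order bits of y may be lost in this addition...
        c = (t - total) - y   # ...so recover them algebraically for the next step
        total = t
    return total

vals = [0.1] * 10_000_000
exact = math.fsum(vals)                  # math.fsum is correctly rounded
print(abs(sum(vals) - exact))            # naive left-to-right sum: noticeable error
print(abs(kahan_sum(vals) - exact))      # compensated sum: within a few ulps of the result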

Two modifications to this initial approach resulted in a roughly 200-fold speedup, so that the current implementation now takes about 2.5 seconds to compute the distribution for the same full 6-deck optimal strategy.  First, the Steiner tree problem of efficiently calculating the probabilities of outcomes of the dealer’s hand can be split up into 10 individual trees, one for each possible up card.  This was an interesting trade-off: when you know you need all possible up cards for all possible player hands– as in the case of computing overall EV– there are savings to be gained by doing the computation for all 10 up cards at once.  But when computing the distribution, some player hands are only ever reached with some dealer up cards, so there is benefit in only doing the computation for those that you need… even if doing them separately actually makes some of that computation redundant.

Second, and most importantly, we don’t have to recursively evaluate every possible resolution of pair split hands.  The key observation is that (1) both halves of the split have the same set of possible hands– viewed as unordered subsets of cards– that may result; and (2) each possible selection of pairs of such resolved split hands may occur many times, but each with the same probability.  So we need only recursively traverse one half of any particular split hand, recording the number of times we visit each resolved hand subset.  Then we traverse the outer product of pairs of such hands, computing the overall outcome and “individual” probability of occurrence, multiplied by the number of orderings of cards that yield that pair of hands.

3:2 Blackjack isn’t always optimal

Finally, I found an interesting, uh, “feature,” in my original implementation of CDZ- strategy calculation.  In early testing of the distribution calculation, I noticed that the original direct computation of overall EV didn’t always agree with the EV derived from the computed distribution.  The problem turned out to be the “special” nature of a player blackjack, that is, drawing an initial hand of ten-ace, which pays 3:2 (when the dealer does not also have blackjack).

There are extreme cases where “standing” on blackjack is not actually optimal: for example, consider a depleted shoe with just a single ace and a bunch of tens.  If you are dealt blackjack, then you can do better than an EV of 1.5 from the blackjack payoff, by instead doubling down, guaranteeing a payoff of 2.0.  Although interesting, so far no big deal; the previous and current implementations both handle this situation correctly.

But now suppose that you are initially dealt a pair of tens, and optimal strategy is to split the pair, and you subsequently draw an ace to one of those split hands.  This hand does not pay 3:2 like a “normal” blackjack… but deciding what “zero memory” action to take may depend on when we correct the expected value of standing on ten-ace to its pre-split 3:2 payoff: in the previous implementation, this was done after the post-split EVs were computed, meaning that the effective strategy might be to double down on ten-ace after a split, but stand with the special 3:2 payoff on an “actual” blackjack.

Since this violates the zero memory constraint of CDZ-, the current updated implementation performs this correction before post-split EVs are computed, so that the ten-ace playing decision will be the same in both cases.  (Note that the strategy calculator also supports the more complex CDP1 and CDP strategies as well, the distinctions of which are for another discussion.)

What I found interesting was just how common this seemingly pathological situation is.  In a simulation of 100,000 shoes played to 75% penetration, roughly 14.5% of them involved depleted shoe compositions where optimal strategy after splitting tens and drawing an ace was to double down instead of stand.  As expected, these situations invariably occurred very close to the cut card, with depleted shoes still overly rich in tens.  (For one specific example, consider the shoe represented by the tuple (6, 6, 1, 5, 8, 2, 5, 9, 2, 34), indicating 6 aces, 6 twos, 34 tens, etc.)

Wrapping up, this post really just captures my notes on the assumptions and implementation details of the computation of the probability distribution of outcomes of a round of blackjack.  The next step is to look at some actual data, where one issue I want to focus on is a comparison of analysis approaches– combinatorial analysis (CA) and Monte Carlo simulation– their relative merits, and in particular, the benefits of combining the two approaches where appropriate.

 


Serializing MATLAB data

Consider the following problem: given a value in the MATLAB programming language, can we serialize it into a sequence of bytes– suitable for, say, storage on disk– in a form that allows easy recovery of the exact original value?

Although I will eventually try to provide an actual solution, the primary motivation for this post is simply to point out some quirks and warts of the MATLAB language that make this problem surprisingly difficult to solve.

“Binary” serialization

Our problem requires a bit of clarification, since there are at least a couple of different reasonable use cases.  First, if we can work with a stream of arbitrary opaque bytes– for example, if we want to send and receive MATLAB data on a TCP socket connection– then there is actually a very simple and robust built-in solution… as long as we’re comfortable with undocumented functionality.  The function b=getByteStreamFromArray(v) converts a value to a uint8 array of bytes, and v=getArrayFromByteStream(b) converts back.  This works on pretty much all types of data I can think of to test, even Java- and user-defined class instances.

Text serialization

But what if we would like something human-readable (and thus potentially human-editable)?  That is, we would like a function similar to Python’s repr, that converts a value to a char string representation, so that eval(repr(v)) “equals” v.  (I say “equals” because even testing such a function is hard to do in MATLAB.  I suppose the built-in function isequaln is the closest approximation to what we’re looking for, but it ignores type information, so that isequaln(int8(5), single(5)) is true, for example.)

Without further ado, following is my attempt at such an implementation, to use as you wish:

function s = repr(v)
%REPR Return string representation of value such that eval(repr(v)) == v.
%
%   Class instances, NaN payloads, and function handle closures are not
%   supported.

    if isstruct(v)
        s = sprintf('cell2struct(%s, %s)', ...
            repr(struct2cell(v)), repr(fieldnames(v)));
    elseif isempty(v)
        sz = size(v);
        if isequal(sz, [0, 0])
            if isa(v, 'double')
                s = '[]';
            elseif ischar(v)
                s = '''''';
            elseif iscell(v)
                s = '{}';
            else
                s = sprintf('%s([])', class(v));
            end
        elseif isa(v, 'double')
            s = sprintf('zeros(%s)', mat2str(sz, 17));
        elseif iscell(v)
            s = sprintf('cell(%s)', mat2str(sz, 17));
        else
            s = sprintf('%s(zeros(%s))', class(v), mat2str(sz, 17));
        end
    elseif ~ismatrix(v)
        nd = ndims(v);
        s = sprintf('cat(%d, %s)', nd, strjoin(cellfun(@repr, ...
            squeeze(num2cell(v, 1:(nd - 1))).', ...
            'UniformOutput', false), ', '));
    elseif isnumeric(v)
        if ~isreal(v)
            s = sprintf('complex(%s, %s)', repr(real(v)), repr(imag(v)));
        elseif isa(v, 'double')
            s = strrep(repr_matrix(@arrayfun, ...
                @(x) regexprep(char(java.lang.Double.toString(x)), ...
                '\.0$', ''), v, '[%s]', '%s'), 'inity', '');
        elseif isfloat(v)
            s = strrep(repr_matrix(@arrayfun, ...
                @(x) regexprep(char(java.lang.Float.toString(x)), ...
                '\.0$', ''), v, '[%s]', 'single(%s)'), 'inity', '');
        elseif isa(v, 'uint64') || isa(v, 'int64')
            t = class(v);
            s = repr_matrix(@arrayfun, ...
                @(x) sprintf('%s(%s)', t, int2str(x)), v, '[%s]', '%s');
        else
            s = mat2str(v, 'class');
        end
    elseif islogical(v) || ischar(v)
        s = mat2str(v);
    elseif iscell(v)
        s = repr_matrix(@cellfun, @repr, v, '%s', '{%s}');
    elseif isa(v, 'function_handle')
        s = sprintf('str2func(''%s'')', func2str(v));
    else
        error('Unsupported type.');
    end
end

function s = repr_matrix(map, repr_scalar, v, format_matrix, format_class)
    s = strjoin(cellfun(@(row) strjoin(row, ', '), ...
        num2cell(map(repr_scalar, v, 'UniformOutput', false), 2).', ...
                                     'UniformOutput', false), '; ');
    if ~isscalar(v)
        s = sprintf(format_matrix, s);
    end
    s = sprintf(format_class, s);
end

That felt like a lot of work… and that’s only supporting the “plain old data” types: struct and cell arrays, function handles, logical and character arrays, and the various floating-point and integer numeric types.  As the help indicates, Java and classdef instances are not supported.  A couple of other cases are only imperfectly handled as well, as we’ll see shortly.

Struct arrays

The code starts with struct arrays.  The tricky issue here is that struct arrays can not only be “empty” in the usual sense of having zero elements, but also– independently of whether they are empty– they can have no fields.  It turns out that the struct constructor, which would work fine for “normal” structures with one or more fields, has limited expressive power when it comes to field-less struct arrays: unless the size is 1×1 or 0x0, some additional concatenation or reshaping is required.  Fortunately, cell2struct handles all of these cases directly.

Multi-dimensional arrays

Next, after handling the tedious cases of empty arrays of various types, the ~ismatrix(v) test handles multi-dimensional arrays– that is, arrays with more than 2 dimensions.  I could have handled this with reshape instead, but I think this recursive concatenation approach does a better job of preserving the “visual shape” of the data.

In the process of testing this, I learned something interesting about multi-dimensional arrays: they can’t have trailing singleton dimensions!  That is, there are 1×1 arrays, and 2×1 arrays, even 1x2x3 and 2x1x3 arrays… but no matter how hard I try, I cannot construct an mxnx1 array, or an mxnxkx1 array, etc.  MATLAB seems to always “squeeze” trailing singleton dimensions automagically.

Numbers

The isnumeric(v) section is what makes this problem almost comically complicated.  There are 10 different numeric types in MATLAB: double and single precision floating point, and signed and unsigned 8-, 16-, 32-, and 64-bit integers.  Serializing arrays of these types should be the job of the built-in function mat2str, which we do lean on here, but only for the shorter integer types, since it fails in several ways for the other numeric types.

First, the nit-picky stuff: I should emphasize that my goal is “round-trip” reproducibility; that is, after converting to string and back, we want the underlying bytes representing the numeric values to be unchanged.  Precision is one issue: for some reason, MATLAB’s default seems to be 15 decimal digits, which isn’t enough– by two— to accurately reproduce all double precision values.  Granted, this is an optional argument to mat2str, which effectively uses sprintf('%.17g',x) under its hood, but Java’s algorithm does a better job of limiting the number of digits that are actually needed for any given value.

Other reasons to bypass mat2str are that (1) for some reason it explicitly “erases” negative zero, and (2) it still doesn’t quite accurately handle complex numbers involving NaN– although it has improved in recent releases.  Witness eval(mat2str(complex(0, nan))), for example.  (My implementation isn’t perfect here, either, though; there are multiple representations of NaN, but this function strips any payload.)

But MATLAB’s behavior with 64-bit integer types is the most interesting of all, I think.  Imagine things from the parser’s perspective: any numeric literal defaults to double precision, which, without a decimal point or fractional part, we can think of as “almost” an int54.  There is no separate syntax for integer literals; construction of “literal” values of the shorter (8-, 16-, and 32-bit) integer types effectively casts from that double-precision literal to the corresponding integer type.

But for uint64 and int64, this doesn’t work… and for a while (until around R2010a), it really didn’t work– there was no way to directly construct a 64-bit integer larger than 2^53, if it wasn’t a power of two!

This behavior has been improved somewhat since then, but at the expense of added complexity in the parser: the expression [u]int64(expr) is now a special case, as long as expr is an integer literal, with no arithmetic, imaginary part, etc.  Even so much as a unary plus will cause a fall back to the usual cast-from-double.  (It appears that Octave, at least as of version 4.0.3, has not yet worked this out.)

The effect on this serialization function is that we have to wrap that explicit uint64 or int64 construction around each individual integer scalar, instead of a single cast of the entire array expression as we can do with all of the other numeric types.

Function handles

Finally, function handles are also special.  First, they must be scalar (i.e., 1×1), most likely due to the language syntax ambiguity between array indexing and function application.  But function handles also can have workspace variables associated with them– usually when created anonymously– and although an existing function handle and its associated workspace can be inspected, there does not appear to be a way to create one from scratch in a single evaluatable expression.

 
