Horse race puzzle

Introduction

This post is part pedagogical rant, part discussion of a beautiful technique in combinatorics, both motivated by a recent exchange with a high school student about an interesting dice game that seems to be a common introductory exercise in probability:

There are 12 horses, numbered 1 through 12, each initially at position 0 on a track.  Play consists of a series of turns: on each turn, the teacher rolls two 6-sided dice, where the sum of the dice indicates which horse to advance one position along the track.  The first horse to reach position n=5 wins the race.

At first glance, this seems like a nice exercise.  Students quickly realize, for example, that horse #1 is a definite loser– the sum of two dice will never equal 1– and that horse #7 is the best bet to win the race, with the largest probability (1/6) of advancing on any given turn.

But what if a student asks, as this particular student did, “Okay, I can see how to calculate the distribution of probabilities of each horse advancing in a single turn, but what about the probabilities of each horse winning the race, as a function of the race length n?”  This makes me question whether this is indeed such a great exercise, at least as part of an introduction to probability.  What started as a fun game and engaging discussion has very naturally led to a significantly more challenging problem, whose solution is arguably beyond most students– and possibly many teachers as well– at the high school level.

I like this game anyway, and I would likely use it if I were in a similar position.  Although the methods involved in an exact solution might be inappropriate at this level, the game still lends itself nicely to investigation via Monte Carlo simulation, especially for students with a programming bent.
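
For example, here is a minimal Monte Carlo sketch in Python (the function and parameter names are my own); it estimates each horse's probability of winning by simulating many races:

import random
from collections import Counter

def simulate_races(n=5, num_races=20000):
    """Estimate each horse's probability of winning a race to position n."""
    wins = Counter()
    for _ in range(num_races):
        positions = Counter()
        while True:
            horse = random.randint(1, 6) + random.randint(1, 6)
            positions[horse] += 1
            if positions[horse] == n:
                wins[horse] += 1
                break
    return {horse: wins[horse] / num_races for horse in range(2, 13)}

print(simulate_races())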

Poissonization

There is an exact solution, however, via several different approaches.  This problem is essentially a variant of the coupon collector’s problem: if each box of cereal contains one of 12 different types of coupons, and I buy boxes of cereal until I have n=5 coupons of one type, what is the probability of stopping with each type of coupon?  Here the horses are the coupon types, and the dice rolls are the boxes of cereal.

As in the coupon collector’s problem, it is helpful to modify the model of the horse race in a way that, at first glance, seems like unnecessary additional complexity: suppose that the dice rolls occur at times distributed according to a Poisson process with rate 1.  Then the advances of each individual horse (that is, the subsets of dice rolls with each corresponding total) are also Poisson processes, each with rate equal to the probability p_i of the corresponding dice roll.

Most importantly, these individual processes are independent, meaning that we can easily compute the probability of desired states of the horses’ positions on the track at a particular time, as the product of the individual probabilities for each horse.  Integrating over all time yields the desired probability that horse j wins the race:

P(j) = \displaystyle\int_{0}^{\infty} p_j \frac{e^{-p_j t}(p_j t)^{n-1}}{(n-1)!} \prod\limits_{i \neq j} \sum\limits_{k=0}^{n-1} \frac{e^{-p_i t}(p_i t)^k}{k!} dt

Intuitively, horse j advances on the final dice roll, after exactly n-1 previous advances, while each of the other horses has advanced at most n-1 times.
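
This integral can also be evaluated numerically rather than exactly; the following Python sketch (my own function names, using SciPy's quad for the integration, and not necessarily how the results below were computed) estimates P(j) for the race described above:

from math import exp, factorial
from scipy.integrate import quad

# Single-turn advance probabilities p_i for horses i = 2..12 (sum of two dice);
# horse 1 never advances and may be ignored.
p = {i: min(i - 1, 13 - i) / 36 for i in range(2, 13)}

def win_probability(j, n=5):
    """P(horse j wins a race to position n), by numerical integration."""
    def integrand(t):
        # Density of horse j's n-th advance occurring at time t...
        density = p[j] * exp(-p[j] * t) * (p[j] * t) ** (n - 1) / factorial(n - 1)
        # ...times the probability that every other horse has at most n-1 advances.
        others = 1.0
        for i, p_i in p.items():
            if i != j:
                others *= sum(exp(-p_i * t) * (p_i * t) ** k / factorial(k)
                              for k in range(n))
        return density * others
    result, _ = quad(integrand, 0, float("inf"))
    return result

print(win_probability(7))  # the favorite, horse #7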

Generating functions

This “Poissonization” trick is not the only way to solve the problem, and in fact may be less suitable for implementation without a sufficiently powerful computer algebra system.  Generating functions may also be used to “encode” the possible outcomes of dice rolls leading to victory for a particular horse, as follows:

G_j(x) = p_j \frac{(p_j x)^{n-1}}{(n-1)!} \prod\limits_{i \neq j} \sum\limits_{k=0}^{n-1} \frac{(p_i x)^k}{k!} 

where the probability that horse j wins on the (m+1)st dice roll is m! times the coefficient of x^m in G_j(x).  Adding up these probabilities for all possible m yields the overall probability of winning.  This boils down to simple polynomial multiplication and addition, allowing relatively straightforward implementation in Python, for example.
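
Here is a sketch of what such a Python implementation might look like, representing G_j(x) as a list of exact rational coefficients (the function names are mine, and this is not necessarily the code used to produce the figure below):

from fractions import Fraction
from math import factorial

# Single-turn advance probabilities for horses 2..12 (sum of two dice).
p = {s: Fraction(min(s - 1, 13 - s), 36) for s in range(2, 13)}

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (index = power of x)."""
    c = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def win_probability(j, n=5):
    """P(horse j wins a race to position n), via the generating function G_j(x)."""
    # Start with the factor p_j * (p_j x)^(n-1) / (n-1)! for the winning horse.
    g = [Fraction(0)] * (n - 1) + [p[j] * p[j] ** (n - 1) / factorial(n - 1)]
    # Multiply in sum_{k=0}^{n-1} (p_i x)^k / k! for each other horse.
    for i, p_i in p.items():
        if i != j:
            g = poly_mul(g, [p_i ** k / factorial(k) for k in range(n)])
    # P(win on roll m+1) is m! times the coefficient of x^m; sum over all m.
    return sum(factorial(m) * c for m, c in enumerate(g))

print(sum(win_probability(j) for j in range(2, 13)))  # should be exactly 1
print(float(win_probability(7)))                      # the favorite, horse #7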

The results are shown in the following figure.  Each curve corresponds to a race length, from n=1 in black– where the outcome is determined by a single dice roll– to n=6 in purple.

Probability distribution of each horse winning, with each curve corresponding to a race length n from 1 to 6.

As intuition might suggest, the longer the race, the more likely the favored horse #7 is to win.  This is true for any non-uniformity in the single-turn probability distribution.  For a contrasting example, consider a race with just 6 horses, with each turn decided by a single die roll.  This race is fair no matter how long it is; every horse always has the same probability of winning.  But if the die is loaded, no matter how slightly, then the longer the race, the more advantage to the favorite.

 

 


The hardest 24 puzzles

Introduction

This post is once again motivated by a series of interesting posts by Mark Dominus.  A “24 puzzle” is a set of 4 randomly selected numbers from 1 to 9, where the objective is to arrange the numbers in an arithmetic expression using only addition, subtraction, multiplication, and division, to yield the value 24.  For example, given the numbers (3, 5, 5, 9), one solution is

5(3 + \frac{9}{5}) = 24

Solutions are in general not unique; for example, another possibility is

3(9 - \frac{5}{5}) = 24

This is a great game for kids, and it can be played with no more equipment than a standard deck of playing cards: remove the tens and face cards, shuffle the remaining 36 cards, and deal 4 cards to “generate” a puzzle.  Or keep all 52 cards, and generate potentially more difficult puzzles involving numbers from 1 to 13 instead of 1 to 9.

Or you could play the game using a different “target” value other than 24… but should you?  That is, is there anything special about the number 24 that makes it more suitable as a target value than, say, 25, or 10, etc.?  And whatever target value we decide to use, what makes some puzzles (i.e., sets of numbers) more difficult to solve than others?  What are the hardest puzzles?  Finally, subtraction is one of the allowed binary operations; what about unary minus (i.e., negation)?  Is this allowed?  Does it matter?  These are the sort of questions that make a simple children’s game a great source of interesting problems for both mathematics and computer science students.

(Aside: Is it “these are the sort of questions” or “these are the sorts of questions”?  I got embarrassingly derailed working on that sentence.  I could have re-worded to avoid the issue entirely, but it’s interesting enough that I choose to leave it in.)

Enumerating possible expressions

Following is my Mathematica implementation of a 24 puzzle “solver”:

trees[n_Integer] := trees[n] =
  If[n == 0, {N},
   Flatten@Table[Outer[Star,
      trees[k], trees[n - 1 - k]],
     {k, 0, n - 1}]]

sub[expr_, values_List, ops_List] :=
 Quiet@Fold[
   ReplacePart[#1,
     MapThread[Rule, {Position[#1, First[#2]], Last[#2]}]] &,
   expr,
   {{N, values}, {Star, ops}}]

search[visit_, values_List, ops_List] :=
 Outer[visit,
  trees[Length[values] - 1],
  Permutations[values],
  Tuples[ops, Length[values] - 1], 1]

The function trees[n] enumerates all possible expression trees involving n binary operations, which are counted by the Catalan numbers.  Each expression tree is just a “template,” with placeholders for the numbers and operators that will be plugged in using the function sub.  For example, a standard 24 puzzle with 4 numbers requires n=3 operators, in one of the following 5 patterns:

N * (N * (N * N))
N * ((N * N) * N)
(N * N) * (N * N)
(N * (N * N)) * N
((N * N) * N) * N

The function search takes a puzzle represented as a set of numbers and set of available operators, and simply explores the outer product of all possible expression trees, permutations of numbers, and selections of operators, “visiting” each resulting expression in turn.

The choice of visitor depends on the question we want to answer.  For example, the following code solves a given puzzle for a given target value, with a visitor that checks each evaluated expression’s value against the target, and pretty-prints the expression if it matches:

show[expr_, values_List, ops_List] :=
 ToExpression[
  ToString@sub[expr, ToString /@ values, ToString /@ ops],
  InputForm, HoldForm]

solve[target_Integer, values_List, ops_List] :=
 Reap@search[
     If[sub[##] == target, Sow@show[##]] &,
     values, ops] // Last // Flatten

But another useful visitor is just sub itself, in which case search computes the set of all possible values that can be made from all possible arithmetic arrangements of the given numbers and operators.  We can use this information in the following sections.
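
A rough Python analogue of this enumeration (my own code, using exact rational arithmetic, and not a translation of the Mathematica above) is sketched below: for a fixed left-to-right ordering of the numbers, it recursively enumerates every expression tree and every choice of operators, and collects the resulting values over all permutations.

from fractions import Fraction
from itertools import permutations

def all_values(numbers, ops="+-*/"):
    """All values obtainable by arranging the given numbers in an arithmetic
    expression using the given binary operators."""
    def values(nums):
        # Values of every expression tree whose leaves are nums, left to right.
        if len(nums) == 1:
            yield Fraction(nums[0])
            return
        for split in range(1, len(nums)):      # position of the root operator
            for left in values(nums[:split]):
                for right in values(nums[split:]):
                    for op in ops:
                        if op == "+":
                            yield left + right
                        elif op == "-":
                            yield left - right
                        elif op == "*":
                            yield left * right
                        elif op == "/" and right != 0:
                            yield left / right
    result = set()
    for perm in set(permutations(numbers)):
        result.update(values(perm))
    return result

print(24 in all_values((3, 5, 5, 9)))   # True: for example, 5 * (3 + 9/5)
print(24 in all_values((1, 1, 1, 1)))   # False: no arrangement works

Iterating this over all C(36,4) = 58,905 possible four-card deals from the 36-card deck, and checking whether a given target appears in the resulting set of values, is one way to reproduce the solvability probabilities discussed in the next section.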

Why 24?

Suppose that we draw 4 random cards from a deck of 36 cards (with tens and face cards removed); what is the probability that the resulting puzzle is solvable?  The answer depends on the target– are we trying to find an expression that evaluates to 24, or to some other value?  The following figure shows the probability that a randomly selected puzzle is solvable, as a function of the target value.

Probability that a randomly selected puzzle, as dealt from a 36-card deck, is solvable, vs. the target value (usually 24).

The general downward trend makes sense: it’s more difficult to make larger numbers.  But most interesting are the targets that are multiples of 12 (highlighted by the vertical grid lines), whose corresponding probabilities are distinctly higher than their neighbors.  This also makes sense, at least in hindsight (although I doubt I would have predicted this behavior): multiples of 12 have a relatively large number of factors, allowing more possible ways to be “built.”

So this explains at least in part why 24 is “the” target value… but why not 12, for example, especially since it has an even higher probability of being solvable (i.e., an even lower probability of frustrating a child playing the game)?  The problem is that the target of 12 seems to be too easy, as the following figure shows, indicating for each target the expected number of different solutions to a randomly selected solvable puzzle:

Expected number of solutions to a randomly selected puzzle, conditioned on the puzzle being solvable, vs. the target value.

Of course, this just pushes the discussion in the other direction, asking whether a larger multiple of 12, like 36, for example, wouldn’t be an even better target value, allowing “difficult” puzzles while still having an approximately 84% probability of being solvable.  And it arguably would be, at least for more advanced players or students.

More generally, the following figure shows these two metrics together, with the expected number of solutions on the x-axis, and the probability of solvability on the y-axis, for each target value, with a few highlighted alternative target values along/near the Pareto frontier:

Probability of solvability vs. expected number of solutions.

The hardest 24 puzzles

Finally, which 24 puzzles are the hardest to solve?  The answer depends on the metric for difficulty, but one reasonable choice is the number of distinct solutions.  That is, among all possible expression trees, permutations of the numbers in the puzzle, and choices of available operators, how many yield the desired target value of 24?  The fewer the possible arrangements that work, the more difficult the puzzle.

It turns out that there are relatively few puzzles that have a unique solution, with exactly one possible arrangement of numbers and operators that evaluates to 24.  The list is below, where for completeness I have included all puzzles involving numbers up to 13 instead of just single digits.  (It’s worth noting that Mark’s example– which is indeed difficult– of arranging (2, 5, 6, 6) to yield 17 would not make this list.  And some of the puzzles that are on this list are arguably pretty easy, suggesting that there is something more to “hardness” than just uniqueness.)

  • (1, 2, 7, 7)
  • (1, 3, 4, 6)
  • (1, 5, 11, 11)
  • (1, 6, 6, 8)
  • (1, 7, 13, 13)
  • (1, 8, 12, 12)
  • (2, 3, 5, 12)
  • (3, 3, 5, 5)
  • (3, 3, 8, 8)
  • (4, 4, 10, 10)
  • (5, 5, 5, 5)
  • (5, 5, 8, 8)
  • (5, 5, 9, 9)
  • (5, 5, 10, 10)
  • (5, 5, 11, 11)
  • (5, 5, 13, 13)

And one more: (3, 4, 9, 10), although this one is special.  It has no solution involving only addition, subtraction, multiplication, and division.  For this puzzle, we must expand the set of available operators to also include exponentiation… and then the solution is unique.


Anagrams

Introduction

This was a fun exercise, motivated by several interesting recent posts by Mark Dominus at The Universe of Discourse about finding anagrams of individual English words, such as (relationships, rhinoplasties), and how to compute a “score” for such anagrams by some reasonable measure of the complexity of the rearrangement, so that (attentiveness, tentativeness), with a common 8-letter suffix, may be viewed as less “interesting” than, say, the more thoroughly shuffled (microclimates, commercialist).

The proposed scoring metric is the size of a “minimum common string partition” (MCSP): what is the smallest number of blocks of consecutive letters into which the first word may be partitioned, so that the blocks can be permuted and re-concatenated to yield the second word?  For example, the above word attentiveness may be partitioned into 3 blocks, at+tent+iveness, and transposing the first two blocks yields tent+at+iveness.  Thus, the score for this anagram is only 3.  Compare this with the score of 12 for (intolerances, crenelations), where all 12 letters must be rearranged.

Computing MCSP

I wanted to experiment with this idea in a couple of different ways.  First, as Mark points out, the task of finding the anagrams themselves is pretty straightforward, but computing the resulting MCSP scores is NP-complete.  Fortunately, there is a nice characterization of the solution– essentially the same “brute force” approach described by Mark– that allows concise and reasonably efficient implementation.

Consider an anagram of two words (w_1, w_2) with n letters each, where the necessary rearrangement of letters of w_1 to produce w_2 is specified by a permutation

\pi:\{1,2,\ldots,n\} \rightarrow \{1,2,\ldots,n\}

where the i-th letter in w_2 is the \pi(i)-th letter in w_1.  This permutation of individual letters corresponds to a permutation of blocks of consecutive letters, where the number of such blocks– the MCSP score– is

s(\pi) = n - \left|\{i<n:\pi(i)+1=\pi(i+1)\}\right|

Computing an MCSP is hard because this permutation transforming w_1 into w_2 is not necessarily unique; we need the permutation that minimizes s(\pi).  The key observation is that each candidate permutation may be decomposed into \pi = \pi_2 \pi_1^{-1}, where \pi_j transforms any canonical (e.g., sorted) ordering of letters into w_j.  So we can fix, say, \pi_2, and the enumeration of possible \pi_1 is easy to express, since we are using the sorted list of letters as our starting point.

The following Mathematica function implements this approach:

anagramScore[w1_String, w2_String] :=
 Module[
  {s1 = Characters[w1], s2 = Characters[w2], p1, p2, i},
  p1 = Ordering@Ordering[s1];
  p2 = Ordering@Ordering[s2];
  Length[s1] - Max@Outer[
     Count[
       Differences[Ordering[ReplacePart[p1, {##} // Flatten]][[p2]]],
       1] &,
     Sequence @@ Map[
       (i = Position[s1, #] // Flatten;
         Thread[i -> #] & /@ Permutations[p1[[i]]]) &,
       Union[s1]
       ], 1]]

Using this, we find, as Mark does, that an anagram with maximum MCSP score of 14 is (cinematographer, megachiropteran)… along with the almost-as-interesting (involuntariness, nonuniversalist), but also other fun ones farther down the list, such as (enthusiastic, unchastities) with a score of 9.
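
For comparison, the same brute-force idea can be sketched in Python (the function name is mine; the search is exponential in the number of repeated letters, so it is practical only for words without too much repetition):

from itertools import permutations, product

def mcsp_score(w1, w2):
    """Minimum number of blocks over all letter permutations taking w1 to w2."""
    assert sorted(w1) == sorted(w2), "not an anagram"
    n = len(w1)
    # Positions in w1 holding each letter.
    positions = {}
    for i, c in enumerate(w1):
        positions.setdefault(c, []).append(i)
    letters = sorted(positions)
    best = n
    # For each letter, try every way to assign its w1 positions to its w2 occurrences.
    for assignment in product(*(permutations(positions[c]) for c in letters)):
        occurrence = {c: iter(a) for c, a in zip(letters, assignment)}
        pi = [next(occurrence[c]) for c in w2]   # pi[i] = source position in w1
        # Adjacent letters of w2 drawn from adjacent positions of w1 share a block.
        blocks = n - sum(pi[i] + 1 == pi[i + 1] for i in range(n - 1))
        best = min(best, blocks)
    return best

print(mcsp_score("attentiveness", "tentativeness"))      # 3, as above
print(mcsp_score("cinematographer", "megachiropteran"))  # 14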

Scoring anagrams using MCSP and frequency

From Mark’s post:

Clearly my chunk score is not the end of the story, because “notaries / senorita” should score better than “abets / baste” (which is boring) or “Acephali / Phacelia” (whatever those are), also 5-pointers. The length of the words should be worth something, and the familiarity of the words should be worth even more [my emphasis].

The problem is that an MCSP score alone is a pretty coarse metric, since it’s an integer bounded by the length of the words in the dictionary.  So the second idea was to refine the ordering of the list of anagrams as Mark suggests, with a lexicographic sort first by MCSP score, then by (average) frequency of occurrence in language, as estimated using the Google Books Ngrams data set (methodology described in more detail here).  The expectation was that this would make browsing a long list easier, with more “recognizable” anagrams appearing together near the beginning of each MCSP grouping.

However, because I wanted to try to reproduce Mark’s results, I also needed a larger dictionary that contained, for example, megachiropteran (which, by the way, is a bat that can have a wing span of over 5 feet).  I used the American English version of the Spell Checker Oriented Word List (SCOWL), combined with the Scrabble and ENABLE2k word lists used in similar previous experiments– which, interestingly, alone contain many anagrams not found in the earlier list.  (The SCOWL was really only needed to “reach” megachiropteran; with the exception of it and nonuniversalist, all of the other examples in this post are valid Scrabble words!)  The resulting word lists and corresponding counts of occurrences in the Ngrams data set are available here.

The resulting list of anagrams is in the text file linked below, sorted by MCSP score, then by the average frequency of the pair of words in each anagram.  Interesting examples high on the list are (personality, antileprosy) with a score of 11, (industries, disuniters) with a score of 10, etc.

The full list of 82,776 anagrams sorted by MCSP and frequency

The individual word frequencies are included in the list, to allow investigation of other sorting methods.  For example, it might be useful to normalize MCSP score by word length.  Or instead of using the average frequency of the two words in an anagram, the ratio of frequencies would more heavily weight anagrams between a really common word and a relatively unknown one, such as (penalties, antisleep)– I have never heard of the latter, but both are Scrabble-playable.

References:

  1. Dominus, M., I found the best anagram in English, The Universe of Discourse, 21 February 2017 [HTML]
  2. Goldstein, A., Kolman, P., Zheng, J., Minimum Common String Partition Problem: Hardness and Approximations, Electronic Journal of Combinatorics, 12 (2005), #R50 [PDF]

Array resizing in MATLAB

I encountered the following MATLAB code recently, simplified for this discussion; it builds a 3-by-4-by-2 array by assigning each of its three 4-by-2 “block” sub-arrays in turn to an initially empty array:

sz = [3, 4, 2];
block = reshape(0:(prod(sz(2:end)) - 1), sz(2:end));
a = [];
for k = 1:sz(1)
    a(k,:,:) = block; block = block + numel(block);
end

As this example shows, MATLAB allows resizing arrays on the fly, so to speak; assignment to an element– or in this case, a range of elements– whose indices would normally be out of bounds automatically re-allocates memory to accommodate the new, larger array.  However, these incremental re-allocation and re-copying operations can be costly in terms of execution time.  Granted, this toy example is small enough not to matter, but in the “real” example, the eventual array size was approximately sz=[256,16384,4], in which case executing the above code takes about 8 seconds on my laptop.

Pre-allocation is usually recommended as a fix: that is, size the entire array in advance, before assigning any elements.  But what if you don’t know the size of the array in advance?  Although there are several approaches to dealing with this situation, with varying complexity, the subject of this post is to describe just how much execution time may be eliminated by only a modest change to the above code.

A major source of slowness is that the array expansion occurs along the first dimension… which in MATLAB, with its Fortran roots, is the index that changes most frequently in the underlying block of memory containing the array elements.  So not only must we re-allocate memory to accommodate each new sub-array, but even copying the existing array elements is a non-trivial task, as the following figure shows.  As we “append” first the cyan block, then the red block, then the green block, each incremental re-allocation also requires “re-interleaving” the old and new elements in the new larger block of memory:

Expanding an eventual 3x8 array in MATLAB along the first dimension, by assigning rows.

Expanding an eventual 3x4x2 array in MATLAB along the first dimension.

Instead, consider the following code, modified to append each block along the last– and most slowly changing– dimension, followed by the appropriate transposition via permute to shuffle everything back into the originally intended shape:

a = [];
for k = 1:sz(1)
    a(:,:,k) = block; block = block + numel(block);
end
a = permute(a, [3, 1, 2]);

The effect of this change is that, although we still incur all of the incremental re-allocation cost, each memory copy operation is a more straightforward and much faster “blit”:

Expanding an eventual 3x4x2 array in MATLAB along the last dimension, followed by transposing to yield the desired order of dimensions.

This version executes in less than a quarter of a second on my laptop, about 35 times faster than the original 8 seconds.


Guess the number

I haven’t posted a puzzle in a while.  The following problem has the usual nice characteristics; it works on a cocktail napkin or as a programming problem, via exact solution or simulation, etc.

I am thinking of a randomly selected integer between 1 and m=10 (inclusive).  You are the first of n=3 players who will each, in turn, get a single guess at the selected number.  The player whose guess is closest wins $300, with ties splitting the winnings evenly.

Here is the catch: each player may not guess a number that has already been guessed by a previous player.  As the first player, what should your strategy be?  Which player, if any, has the advantage?  And what happens if we vary m and n?


Probability of a Scrabble bingo

My wife and I have been playing Scrabble recently.  She is much better at the game than I am, which seems to be the case with most games we play.  But neither of us is an expert, so bingos– playing all 7 tiles from the rack in a single turn, for a 50-point bonus– are rare.  I wondered just how rare they should be… accounting for the fact that I am a novice player.

Let’s focus the problem a bit, and just consider the first turn of the game, when there are no other tiles on the board: what is the probability that 7 randomly drawn Scrabble tiles may be played to form a valid 7-letter word?

There are {100 \choose 7}, or over 16 billion equally likely ways to draw a rack of 7 tiles from the 100 tiles in the North American version of the game.  But since some tiles are duplicated, there are only 3,199,724 distinct possible racks (not necessarily equally likely).  Which of these racks form valid words?

It depends on what we mean by valid.  According to the 2014 Official Tournament and Club Word List (the latest for which an electronic version is accessible), there are 25,257 playable words with 7 letters… but many of those are words that I don’t even know, let alone expect to be able to recognize from a scrambled rack of tiles.  We need a way to reduce this over-long official list of words down to a “novice” list of words– or better yet, rank the entire list from “easiest” to “hardest,” and compute the desired probability as a function of the size of the accepted dictionary.

The Google Books Ngrams data set (English version 20120701) provides a means of doing this.  As we have done before (described here and here), we can map each 7-letter Scrabble word to its frequency of occurrence in the Google Books corpus, the idea being that “easier” words occur more frequently than “harder” words.

The following figure shows the sorted number of occurrences of all 7-letter Scrabble words on a logarithmic scale, with some highlighted examples, ranging from between, the single most frequently occurring 7-letter Scrabble word, to simioid, one of the least frequently occurring words… and this doesn’t even include the 1444 playable words– about 5.7% of the total– that appear nowhere in the entire corpus, such as abaxile and zygoses.

Scrabble 7-letter words ranked by frequency of occurrence in Google Books Ngrams data set.  The least frequent word shown here that I recognize is “predate.”

Armed with this sorted list of 25,257 words, we can now compute, as a function of n \leq 25257, the probability that a randomly drawn rack of 7 tiles may be played to form one of the n easiest words in the list.  Following is Mathematica code to compute these probabilities.  This would be slightly simpler– and much more efficient– if not for the wrinkle of dealing with blank tiles, which allow multiple different words to be played from the same rack of tiles.

tiles = {" " -> 2, "a" -> 9, "b" -> 2, "c" -> 2, "d" -> 4, "e" -> 12,
   "f" -> 2, "g" -> 3, "h" -> 2, "i" -> 9, "j" -> 1, "k" -> 1, "l" -> 4,
   "m" -> 2, "n" -> 6, "o" -> 8, "p" -> 2, "q" -> 1, "r" -> 6, "s" -> 4,
   "t" -> 6, "u" -> 4, "v" -> 2, "w" -> 2, "x" -> 1, "y" -> 2, "z" -> 1};

{numBlanks, numTiles} = {" " /. tiles, Total[Last /@ tiles]};

racks[w_String] := Map[
  StringJoin@Sort@Characters@StringReplacePart[w, " ", #] &,
  Map[{#, #} &, Subsets[Range[7], numBlanks], {2}]]

draws[r_String] :=
 Times @@ Binomial @@ Transpose[Tally@Characters[r] /. tiles]

all = {};
p = Accumulate@Map[(
       new = Complement[racks[#], all];
       all = Join[all, new];
       Total[draws /@ new]
       ) &,
     words] / Binomial[numTiles, 7];
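
As a simplified cross-check, ignoring the two blank tiles (which are exactly the wrinkle the code above handles), the same counting can be sketched in Python (function names are mine):

from collections import Counter
from math import comb, prod

# North American Scrabble tile counts, with the two blanks omitted for simplicity.
tiles = Counter({'a': 9, 'b': 2, 'c': 2, 'd': 4, 'e': 12, 'f': 2, 'g': 3,
                 'h': 2, 'i': 9, 'j': 1, 'k': 1, 'l': 4, 'm': 2, 'n': 6,
                 'o': 8, 'p': 2, 'q': 1, 'r': 6, 's': 4, 't': 6, 'u': 4,
                 'v': 2, 'w': 2, 'x': 1, 'y': 2, 'z': 1})
total_tiles = sum(tiles.values())  # 98 without the blanks

def rack_draws(rack):
    """Number of ways to draw the 7 tiles of this rack (a string of letters)."""
    need = Counter(rack)
    # comb(available, needed) is 0 whenever needed > available, as desired.
    return prod(comb(tiles[c], need[c]) for c in need)

def playable_probability(words):
    """Probability that 7 drawn tiles (no blanks) can spell some word in the list."""
    racks = {"".join(sorted(w)) for w in words}  # distinct racks, counted once each
    return sum(rack_draws(r) for r in racks) / comb(total_tiles, 7)

print(playable_probability(["between", "easiest", "jukebox"]))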

The results are shown in the following figure, along with another sampling of specific playable words.  For example, if we include the entire official word list, the probability of drawing a playable 7-letter word is 21226189/160075608, or about 0.132601.

Probability that 7 randomly drawn tiles form a word, vs. dictionary size.

A coarse inspection of the list suggests that I confidently recognize only about 8 or 9 thousand– roughly a third– of the available words, meaning that my probability of playing all 7 of my tiles on the first turn is only about 0.07.  In other words, instead of a first-turn bingo every 7.5 games or so on average, I should expect to have to wait nearly twice as long.  We’ll see if I’m even that good.


Risk of (gambler’s) ruin

Suppose that you start with an initial bankroll of m dollars, and repeatedly make a wager that pays $1 with probability p, and loses $1 with probability 1-p.  What is the risk of ruin, i.e., the probability that you will eventually go broke?

This is the so-called gambler’s ruin problem.  It is a relatively common exercise to show that if p \leq 1/2, the probability of ruin is 1, and if p > 1/2, then the probability is

(\frac{1-p}{p})^m

But what if the wager is not just win-or-lose a dollar, but is instead specified by an arbitrary probability distribution of outcomes?  For example, suppose that at each iteration, we may win any of (-2,-1,0,+1,+2) units, with respective probabilities (1/15,2/15,3/15,4/15,5/15).  The purpose of this post is to capture my notes on some seemingly less well-known results in this more general case.

(The application to my current study of blackjack betting is clear: we have shown that, at least for a shoe game, even if we play perfectly, we are still going to lose if we don’t vary our bet.  We can increase our win rate by betting more in favorable situations… but a natural constraint is to limit our risk of ruin, or probability of going broke.)

Schlesinger (see Reference (2) below) gives the following formula for risk of ruin, due to “George C., published on p. 8 of ‘How to Win $1 Million Playing Casino Blackjack'”:

(\frac{1 - \frac{\mu}{\sigma}}{1 + \frac{\mu}{\sigma}})^\frac{m}{\sigma}

where \mu and \sigma are the mean and standard deviation, respectively, of the outcome of each round (or hourly winnings).  It is worth emphasizing, since it was unclear to me from the text, that this formula is an approximation, albeit a pretty good one.  The derivation is not given, but the approach is simple to describe: normalize the units of both bankroll and outcome of rounds to have unit variance (i.e., divide everything by \sigma), then use the standard two-outcome ruin probability formula above with win probability p chosen to reflect the appropriate expected value of the round, i.e., p - (1-p) = \mu / \sigma.

The unstated assumption is that 0 < \mu < \sigma (note that ruin is guaranteed if \mu < 0, or if \mu = 0 and \sigma > 0), and that accuracy of the approximation depends on \mu \ll \sigma \ll m, which is fortunately generally the case in blackjack.
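
Here is a sketch of the approximation in Python, applied to the example distribution above, together with a crude Monte Carlo check (the function names and the escape-threshold trick are mine, not from either reference):

import random

def ruin_probability_approx(outcomes, probs, bankroll):
    """The approximation above: ((1 - mu/sigma) / (1 + mu/sigma)) ** (bankroll / sigma)."""
    mu = sum(x * p for x, p in zip(outcomes, probs))
    sigma = (sum(x * x * p for x, p in zip(outcomes, probs)) - mu * mu) ** 0.5
    assert 0 < mu < sigma, "the approximation assumes 0 < mu < sigma"
    return ((1 - mu / sigma) / (1 + mu / sigma)) ** (bankroll / sigma)

def ruin_probability_sim(outcomes, probs, bankroll, trials=10000, escape=100):
    """Monte Carlo estimate; a trial counts as safe once the bankroll reaches the
    escape threshold, from which (with positive drift) ruin is very unlikely."""
    ruined = 0
    for _ in range(trials):
        b = bankroll
        while 0 < b < escape:
            b += random.choices(outcomes, probs)[0]
        if b <= 0:
            ruined += 1
    return ruined / trials

outcomes, probs = (-2, -1, 0, 1, 2), (1/15, 2/15, 3/15, 4/15, 5/15)
# With a bankroll this small the mu << sigma << m caveat above fails, so expect
# only rough agreement between the two estimates.
print(ruin_probability_approx(outcomes, probs, bankroll=5))
print(ruin_probability_sim(outcomes, probs, bankroll=5))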

There is an exact formula for risk of ruin, at least as long as outcomes of each round are bounded and integral.  In Reference (1) below, Katriel describes a formula involving the roots inside the complex unit disk of the equation

\sum p_k z^k = 1

where p_k is the probability of winning k units in each round.  Execution time and numeric stability make effective implementation tricky.

Finally, just to have some data to go along with the equations, following is an example of applying these ideas to analysis of optimal betting in blackjack.  Considering the same rules and setup as in the most recent posts (6 decks, S17, DOA, DAS, SPL1, no surrender, 75% penetration), let’s evaluate all possible betting ramps with a 1-16 spread through a maximum (floored) true count of 10, for each of five different betting and playing strategies, ranging from simplest to most complex:

  1. Fixed basic “full-shoe” total-dependent zero-memory strategy (TDZ), using Hi-Lo true count for betting only.
  2. Hi-Lo with the Illustrious 18 indices.
  3. Hi-Lo with full indices.
  4. Hi-Opt II with full indices and ace side count.
  5. “Optimal” betting and playing strategy, where playing strategy is CDZ- optimized for each pre-deal depleted shoe, and betting strategy is ramped according to the corresponding exact pre-deal expected return.

Then assuming common “standard” values of $10,000 initial bankroll, a $10 minimum bet, and 100 hands per hour, the following figure shows the achievable win rate ($ per hour) and corresponding risk of ruin for each possible strategy and betting ramp:

Win rate vs. risk of ruin for various betting and playing strategies.

There are plenty of interesting things to note and investigate here.  The idea is that we can pick a maximum acceptable risk of ruin– such as the red line in the figure, indicating the standard Kelly-derived value of 1/e^2, or about 13.5%– and find the betting ramp that maximizes win rate without exceeding that risk of ruin.  Those best win rates for this particular setup are:

  1. Not achievable for TDZ (see below).
  2. $20.16/hr for Hi-Lo I18.
  3. $21.18/hr for Hi-Lo full.
  4. $26.51/hr for Hi-Opt II.
  5. $33.09/hr for optimal play.

Fixed basic TDZ strategy, shown in purple, just isn’t good enough; that is, there is no betting ramp with a risk of ruin smaller than about 15%.  And some betting ramps, even with the 1-16 spread constraint, still yield a negative overall expected return, resulting in the “tail” at P(ruin)=1.  (But that’s using Hi-Lo true count as the “input” to the ramp; it is possible that “perfect” betting using exact pre-deal expected return could do better.)

References:

  1. Katriel, Guy, Gambler’s ruin probability – a general formula, arXiv:1209.4203v4 [math.PR], 2 July 2013
  2. Schlesinger, Don, Blackjack Attack: Playing the Pros’ Way, 3rd ed. Las Vegas: RGE Publishing, Ltd., 2005