Whodunit logic puzzle

You are a detective investigating a robbery. The five suspects have made the following statements:

  • Paul says, “Neither Steve nor Ted was in on it.”
  • Quinn says, “Ray wasn’t in on it, but Paul was.”
  • Ray says, “If Ted was in on it, then so was Steve.”
  • Steve says, “Paul wasn’t in on it, but Quinn was.”
  • Ted says, “Quinn wasn’t in on it, but Paul was.”

You do not know which, nor even how many, of the five suspects were involved in the crime. However, you do know that every guilty suspect is lying, and every innocent suspect is telling the truth. Which suspect or suspects committed the crime?

I think puzzles similar to this one make good, fun homework problems in a discrete mathematics course introducing propositional logic. However, this particular puzzle is a bit more complex than the usual “Which one of three suspects is guilty?” type, not just because there are more suspects, but also because we don’t know how many suspects are guilty.

That added complexity is motivated by trying to transform this typically pencil-and-paper mathematical logic problem into a potentially nice computer science programming exercise: consider writing a program to automate solving this problem… or even better, writing a program to generate new random instances of problems like this one, while ensuring that the resulting puzzle has some reasonably “nice” properties. For example, the solution should be unique; but the puzzle should also be “interesting,” in that we should need all of the suspects’ statements to deduce who is guilty (that is, any proper subset of the statements should imply at least two distinct possible solutions).
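Automating the solution really is straightforward; following is a possible brute-force solver in Python (the encoding of the statements as Boolean expressions is mine, but follows directly from the puzzle):

from itertools import product

suspects = ['Paul', 'Quinn', 'Ray', 'Steve', 'Ted']

def statements(g):
    """Truth value of each suspect's statement given guilt assignment g."""
    p, q, r, s, t = (g[name] for name in suspects)
    return [not s and not t,   # Paul: neither Steve nor Ted was in on it
            not r and p,       # Quinn: Ray wasn't in on it, but Paul was
            (not t) or s,      # Ray: if Ted was in on it, then so was Steve
            not p and q,       # Steve: Paul wasn't in on it, but Quinn was
            not q and p]       # Ted: Quinn wasn't in on it, but Paul was

# Every guilty suspect lies and every innocent one tells the truth, so each
# statement's truth value must differ from its speaker's guilt.
for guilt in product([False, True], repeat=5):
    g = dict(zip(suspects, guilt))
    if all(said != g[name] for name, said in zip(suspects, statements(g))):
        print([name for name in suspects if g[name]])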


Coupling from the past


This is a follow-up to one of last month’s posts that contained some images of randomly-generated “lozenge tilings.” The focus here is not on the tilings themselves, but on the perfectly random sampling technique due to Propp and Wilson known as “coupling from the past” (see references below) that was used to generate them. As is often the case here, I don’t have any new mathematics to contribute; the objective of this post is just to go beyond the pseudo-code descriptions of the technique in most of the literature, and provide working Python code with a couple of different example applications.

The basic problem is this: suppose that we want to sample from some probability distribution \pi over a finite discrete space S, and that we have an ergodic Markov chain whose steady state distribution is \pi. An approximate sampling procedure is to pick an arbitrary initial state, then just run the chain for some large number of iterations, with the final state as your sample. This idea has been discussed here before, in the contexts of card shuffling, the board game Monopoly, and the lozenge tilings from last month.

For example, consider shuffling a deck of cards using the following iterative procedure: select a random adjacent pair of cards, and flip a coin to decide whether to put them in ascending or descending order (assuming any convenient total ordering, e.g. first by rank, then by suit alphabetically). Repeat for some “large” number of iterations.

There are two problems with this approach:

  1. It’s approximate: the longer we iterate the chain, the closer we get to the steady state distribution… but no matter when we stop, the distribution of the resulting state is (usually) not exactly \pi.
  2. How many iterations are enough? For many Markov chains, the mixing time may be difficult to analyze or even estimate.

Coupling from the past

Coupling from the past can be used to address both of these problems. For a surprisingly large class of applications, it is possible to sample exactly from the stationary distribution of a chain, by iterating multiple realizations of the chain until they are “coupled,” without needing to know ahead of time when to stop.

First, suppose that we can express the random state transition behavior of the chain using a fixed, deterministic function f:S \times [0,1) \to S, so that for a random variable U uniformly distributed on the unit interval,

\forall i,j\ P(f(s_i, U)=s_j) = p_{i,j}

In other words, given a current state X_t and random draw U_t, the next state is X_{t+1}=f(X_t, U_t).
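For example, a minimal sketch of such an update function for a hypothetical two-state chain (the transition probabilities here are made up purely for illustration):

# Hypothetical two-state chain with transition matrix
#   p = [[0.7, 0.3],
#        [0.4, 0.6]]
# so that P(f(i, U) = j) = p[i][j] for U uniform on [0, 1).
def f(state, u):
    if state == 0:
        return 0 if u < 0.7 else 1
    else:
        return 1 if u < 0.6 else 0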

Now consider a single infinite sequence of random draws (u_1, u_2, u_3, \ldots). We will use this same single source of randomness to iterate multiple realizations of the chain, one for each of the |S| possible initial states… but starting in the past, and running forward to time zero. More precisely, let’s focus on two particular chains with different initial states s_i and s_j at time -n, so that

X_{-n}=s_i, Y_{-n}=s_j

X_{-(n-1)}=f(X_{-n}, u_n), Y_{-(n-1)}=f(Y_{-n}, u_n)

\vdots

X_{-1}=f(X_{-2}, u_2), Y_{-1}=f(Y_{-2}, u_2)

X_0=f(X_{-1}, u_1), Y_0=f(Y_{-1}, u_1)

(Notice how the more random draws we generate, the farther back in time they are used. More on this shortly.)

A key observation is that if X_t=Y_t at any point, then the chains are “coupled”: since they experience the same sequence of randomness influencing their behavior, they will continue to move in lockstep for all subsequent iterations as well, up to X_0=Y_0. Furthermore, if the states at time zero are all the same for all possible initial states run forward from time -n, then the distribution of that final state X_0 is the stationary distribution of the chain.

But what if all initial states don’t end at the same final state? This is where the single source of randomness is key: we can simply look farther back in the past, say 2n time steps instead of n, by extending our sequence of random draws… as long as we re-use the existing random draws to make the same “updates” to the states at later times (i.e., closer to the end time zero).

Monotone coupling

So all we need to do to perfectly sample from the stationary distribution is

  1. Choose a sufficiently large n to look far enough into the past.
  2. Run |S| realizations of the chain, one for each initial state, all starting at time -n and running up to time zero.
  3. If the final state X_0 is the same for all initial states, output X_0 as the sample. Otherwise, look farther back in the past (n \leftarrow 2n), generate additional random draws accordingly, and go back to step 2.

This doesn’t seem very helpful, since step 2 requires enumerating elements of the typically enormous state space. Fortunately, in many cases, including both lozenge tiling and card shuffling, it is possible to simplify the procedure by imposing a partial order \preceq on the state space– with minimum and maximum elements– that is preserved by the update function f. That is, suppose that for all states s_i,s_j \in S and random draws u, if s_i \preceq s_j, then f(s_i, u) \preceq f(s_j,u).

Then instead of running the chain for all possible initial states, we can just run two chains, starting with the minimum and maximum elements in the partial order. If those two chains are coupled by time zero, then they will also “squeeze” any other initial state into the same coupling.

Following is the resulting Python implementation:

import random

def monotone_cftp(mc):
    """Return stationary distribution sample using monotone CFTP."""
    updates = [mc.random_update()]
    while True:
        s0, s1 = mc.min_max()
        for u in updates:
            s0.update(u)
            s1.update(u)
        if s0 == s1:
            break
        updates = [mc.random_update()
                   for k in range(len(updates) + 1)] + updates
    return s0

Sort of like the Fisher-Yates shuffle, this is one of those algorithms that is potentially easy to implement incorrectly. In particular, note that the list of “updates,” which is traversed in order during iteration of the two chains, is prepended with additional blocks of random draws as needed to start farther back in the past. The figure below shows the resulting behavior, mapping each output of the random number generator to the time at which it is used to update the chains.

Random draws in the order they are generated, vs. the order in which they are used in state transitions.

Coming back once again to the card shuffling example, the code below uses coupling from the past with random adjacent transpositions to generate an exactly uniform random permutation p. (A similar example generating random lozenge tilings may be found at the usual location here.)

class Shuffle:
    def __init__(self, n, descending=False):
        self.p = list(range(n))
        if descending:
            self.p.reverse()

    def min_max(self):
        n = len(self.p)
        return Shuffle(n, False), Shuffle(n, True)

    def __eq__(self, other):
        return self.p == other.p

    def random_update(self):
        return (random.randint(0, len(self.p) - 2),
                random.randint(0, 1))

    def update(self, u):
        k, c = u
        if (c == 1) != (self.p[k] < self.p[k + 1]):
            self.p[k], self.p[k + 1] = self.p[k + 1], self.p[k]


The minimum and maximum elements of the partial order are the identity and reversal permutations, as might be expected; but it’s an interesting puzzle to determine the actual partial order that these random adjacent transpositions preserve. Consider, for example, a similar process where we select a random adjacent pair of cards, then flip a coin to decide whether to swap them or not (vs. whether to put them in ascending or descending order). This process has the same uniform stationary distribution, but won’t work when applied to monotone coupling from the past.

Re-generating vs. storing random draws

One final note: although the above implementation is relatively easy to read, it may be prohibitively expensive to store all of the random draws as they are generated. The slightly more complex implementation below is identical in output behavior, but is more space-efficient by only storing “markers” in the random stream, re-generating the draws themselves as needed when looking farther back into the past.

import random

def monotone_cftp(mc):
    """Return stationary distribution sample using monotone CFTP."""
    updates = [(random.getstate(), 1)]
    while True:
        s0, s1 = mc.min_max()
        rng_next = None
        for rng, steps in updates:
            random.setstate(rng)
            for t in range(steps):
                u = mc.random_update()
                s0.update(u)
                s1.update(u)
            if rng_next is None:
                rng_next = random.getstate()
        if s0 == s1:
            break
        updates.insert(0, (rng_next, 2 ** len(updates)))
    return s0



  1. Propp, J. and Wilson, D., Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics, Random Structures and Algorithms, 9, 1996, p. 223-252 [PDF]
  2. Wilson, D., Mixing Times of Lozenge Tiling and Card Shuffling Markov Chains, Annals of Applied Probability, 14(1), 2004, p. 274-325 [PDF]

How high can you count in limited space?


I recently saw several different checks, all from the same bank, each with a different dollar amount printed in a fixed-width font, similar to the shortened example below:

**PAY EXACTLY**twelve thousand thirty-four and 56/100****************

Counting asterisks confirmed that the amount fields on each check were all padded to exactly the same length… which made me wonder, what is the largest check that the bank could print, using English words for the dollar amount?

Actually, that’s not quite the question I had in mind. There are very short names for very large numbers, but it’s not very helpful to say that a bank can cut a check for, say, “one googol” dollars, when most smaller amounts would not fit in the same length of field.

So to make the problem less linguistic and more mathematical, let’s instead ask the following more precise question: what is the largest integer n(m) such that every dollar amount from zero to n(m) can be printed in words using at most m characters?

Converting integers into words

For short fields, such as m=67 in the case of the bank checks, brute force is sufficient; we just need a function to convert integers into words. This is a pretty common programming problem, with one Python implementation shown below, where integer_name(n) works for any |n|<10^{36}:

units = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven',
    'eight', 'nine', 'ten', 'eleven', 'twelve', 'thirteen', 'fourteen',
    'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen']
tens = ['zero', 'ten', 'twenty', 'thirty', 'forty', 'fifty', 'sixty',
    'seventy', 'eighty', 'ninety']
powers = ['zero', 'thousand', 'million', 'billion', 'trillion',
    'quadrillion', 'quintillion', 'sextillion', 'septillion', 'octillion',
    'nonillion', 'decillion']
hundred = 'hundred'
minus = 'minus'
comma = ','
and_ = ' and'
space = ' '
hyphen = '-'
empty = ''

def small_integer_name(n, use_and=False):
    s = empty
    if n >= 100:
        q, n = divmod(n, 100)
        s = units[q] + space + hundred + (
            (and_ if use_and else empty) + space if n > 0 else empty)
    if n >= 20:
        q, n = divmod(n, 10)
        s += tens[q] + (hyphen if n > 0 else empty)
    return (s + units[n] if n > 0 else s)

def integer_name(n, use_comma=False, use_and=False, power=0):
    if n < 0:
        return minus + space + integer_name(-n, use_comma, use_and)
    elif n == 0:
        return units[0]
    s = empty
    if n >= 1000:
        q, n = divmod(n, 1000)
        s = integer_name(q, use_comma, use_and, power + 1) + (
            (comma if use_comma else empty) + space if n > 0 else empty)
    return (s + small_integer_name(n, use_and) +
            (space + powers[power] if power > 0 else empty) if n > 0 else s)

There are a couple of things to note. First, although typical American style does not include commas separating thousands, or the word and between hundreds and tens (e.g., “one hundred and twenty-three”), it’s nice to have the option here, since it affects the calculation of n(m).

Also, “everything” is a variable, including the hyphen, comma, space, even the empty string. The addition operator is overloaded for string concatenation, but it’s easy to replace the string constants with, say, integer word counts, or syllable counts, or whatever, without changing the code. For example, we can automatically generate lousy haiku:

One hundred twenty-

one thousand seven hundred


Coming back to the bank checks, it turns out that n(67)=1113322, so that the bank can print a check for any amount up to $1,113,322.99. Which doesn’t seem terribly large– I wonder if they fall back to using numerals if the amount doesn’t fit using words?
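For reference, here is a minimal brute-force sketch of the computation of n(m), using the integer_name function above (and considering only the words for the whole-dollar amount, not the cents or the padding):

def n_of_m(m, use_comma=False, use_and=False):
    """Largest n such that every amount 0..n fits in m characters
    (brute force; practical only for small m)."""
    n = 0
    while len(integer_name(n + 1, use_comma, use_and)) <= m:
        n += 1
    return n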

Twitter scale

This isn’t a new problem. A couple of years ago, FiveThirtyEight.com’s Riddler posed a problem involving the Twitter account @CountVonCount, of Sesame Street fame, essentially asking for n(139)… and then again asking for n(279) just last month, in response to the recent increase in maximum length of a tweet from 140 to 280 characters (in each case, less one character to account for an exclamation point).

These field widths are large enough that a brute force approach is no longer feasible. It’s a nice problem to implement a function to compute n(m) efficiently and exactly, with results as shown in the figure below.

How high can you count (on the y-axis) spelling each number in words using at most the number of characters on the x-axis?

All of the source code is available at the usual location here.


Proofs and theorems without words


In the figure below, a regular hexagon with side length 12 is tiled with “lozenges,” sort of like triangular dominoes, each consisting of a pair of unit-length equilateral triangles joined at a common side.

Each lozenge is in one of three possible orientations. Although the orientations appear random, if we count carefully, we find that there are exactly the same number of lozenges in each of the three orientations. In fact, no matter how we tile the hexagon, it is always the case that the number of lozenges (144 in this case) of each “type” is fixed. Can you prove this?

Proof without words

Although the (proof of the) Pythagorean theorem seems to be the most common example of a proof without words– usually a picture or diagram that doesn’t require any words to explain– this problem, with its “proof” below, is probably my favorite.

One reason I like this problem, particularly as motivation for student discussion, is that it is somewhat controversial. Is the above “proof without words” really a satisfactory proof?

Theorem without words

Moving away from random tilings for the moment, the original motivation for this post wasn’t actually a proof without words, but a theorem without words. I recently learned an interesting result I had not seen before, involving no more than high school-level physics, that I thought could be presented as an animation without any accompanying explanation:

Unfortunately, I’m not sure this quite succeeds as a “theorem without words.” As with the tiling proof, there is more going on here than the above animation arguably conveys. In particular, one of the most interesting things about this problem– that the animation doesn’t really show– is that the shape (that is, the eccentricity) of the ellipse of apogees is invariant: it does not depend on the speed of the projectile, or gravity, or any relationship between the two.

More on random lozenge tilings

Finally, some source code: although there are plenty of papers and web sites with images of random tilings like those above, I wanted to make my own images, and working code is harder to find. See here for my Python implementation for generating a random lozenge tiling of a hexagon of a given size. For example, the following figure shows a random tiling of a hexagon with side length 64:

My implementation is quick and dirty in the sense that it simply iterates the Markov chain of single-step up-or-down moves in the corresponding family of non-intersecting lattice paths, essentially shuffling long enough for the resulting random tiling to be approximately uniform. (Note that the tilings in the above images are more random than they might appear, despite the “frozen” regions near the corners.) It would be interesting to extend this implementation to use coupling from the past, to generate an exactly uniform random tiling.


  1. David, G. and Tomei, C., The Problem of the Calissons, American Mathematical Monthly, 96(5), May 1989, p. 429-431 [JSTOR]
  2. Wilson, D., Mixing Times of Lozenge Tiling and Card Shuffling Markov Chains, Annals of Applied Probability, 14(1), 2004, p. 274-325 [PDF]

Hilbert halftone art


This post was motivated by a recent attempt to transform a photograph into a large digital print, in what I hoped would be a mathematically interesting way. The idea was pretty simple: convert the (originally color) image into a black-on-white line drawing– a white background, with a single, continuous, convoluted black curve, one pixel wide.

This isn’t a new idea. One example of how to do this is to convert the image into a solution of an instance of the traveling salesman problem, with more “cities” clustered in darker regions of the source image. But I wanted to do something slightly different, with more explicitly visible structure… which doesn’t necessarily translate to more visual appeal: draw a Hilbert space-filling curve, but vary the order of the curve (roughly, the depth of recursion) locally according to the gray level of the corresponding pixels of the source image.

After some experimenting, I settled on the transformation described in the figure below. Each pixel of the source image is “inflated” to an 8-by-8 block of pixels in the output, with a black pixel (lower left) represented by a second-order Hilbert curve, and a white pixel (upper left) by just a line segment directly connecting the endpoints of the block, with two additional gray levels in between, each connecting progressively more/fewer points along the curve.

Conversion of 2×2-pixel image, with each of four gray levels mapped to corresponding 8×8 block of (approximated) Hilbert curve.


The figure below shows an example of creating an image for input to the algorithm. There are two challenges to consider:

  • Size: The 8-fold inflation of each pixel means that if we want the final output to be 1024-by-1024, then the input image must be 128-by-128, as shown here. (For my project, I could afford a 512-by-512 input, with larger than single-pixel steps between points on the curve, yielding output suitable for a 36-inch print.)
  • Grayscale: The color in the input image must be quantized to just four gray levels, using your favorite photo editor. (I did this in Mathematica; a rough Python equivalent is sketched below the figure.)

Conversion of original color image to 128-by-128 image with 4 gray levels.
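As promised above, here is one possible sketch of the downsampling and quantization using the Pillow library (the filenames are hypothetical; the four target levels match the gray values in the source code below):

from PIL import Image

# The four gray levels used by the halftone transformation below.
LEVELS = [0, 85, 160, 255]

def prepare(filename, size=128):
    """Downsample to size-by-size grayscale, quantized to four levels."""
    img = Image.open(filename).convert('L').resize((size, size))
    return img.point(lambda v: min(LEVELS, key=lambda g: abs(g - v)))

prepare('photo.jpg').save('photo.128.png')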

The figure below shows the resulting output. You may have to zoom in to see the details of the curve, especially in the black regions.

Black and white 1024-by-1024 image containing a single black Hilbert curve, one pixel in width.

Source code

Following is the Python source that does the transformation. It uses the Hilbert curve encoding module from my allRGB experiment, and I used Pygame for the image formatting so that I could watch the output as it was created.

import hilbert
import pygame

BLACK = (0, 0, 0, 255)
DARK_GRAY = (85, 85, 85, 255)
LIGHT_GRAY = (160, 160, 160, 255)
WHITE = (255, 255, 255, 255)

STEPS = {BLACK: [1] * 15,
         DARK_GRAY: [1, 1, 1, 1, 4, 1, 1, 5],
         LIGHT_GRAY: [4, 7, 4],
         WHITE: [15]}

class Halftone:
    def __init__(self, image, step):
        self.h = hilbert.Hilbert(2)
        self.index = -1
        self.pos = (0, 0)
        width, height = image.get_size()
        self.target = pygame.Surface((step * 4 * width, step * 4 * height))
        for pixel in range(width * height):
            self.move(1, step)
            for n in STEPS[tuple(image.get_at([w // 4 for w in self.pos]))]:
                self.move(n, step)

    def move(self, n, step):
        self.index = self.index + n
        next_pos = self.h.encode(self.index)
        pygame.draw.line(self.target, BLACK, [step * w for w in self.pos],
                                             [step * w for w in next_pos])
        self.pos = next_pos

if __name__ == '__main__':
    import sys
    step = int(sys.argv[1])
    for filename in sys.argv[2:]:
        pygame.image.save(Halftone(pygame.image.load(filename), step).target,
                          filename + '.ht.png')

Probability of playable racks in Scrabble


Earlier this year, I spent some time calculating the probability of a Scrabble “bingo”: drawing a rack of 7 tiles and playing all of them in a single turn to spell a 7-letter word. The interesting part of the analysis was the use of the Google Books Ngrams data set to limit the dictionary of playable words by their frequency of occurrence, to account for novice players like me that might only know “easier” words in more common use. The result was that the probability of drawing a rack of 7 letters that can play a 7-letter word is about 0.132601… if we allow the entire official Scrabble dictionary, including words like zygoses (?). The probability is only about 0.07 if we assume that I only know about a third– the most frequently occurring third– of the 7-letter words in the dictionary.

But given just how poorly I play Scrabble, it seems optimistic to focus on a bingo. Instead of asking whether we can play a 7-letter word using the entire rack, let’s consider what turns out to be a much harder problem, namely, whether the rack is playable at all: that is,

Problem: What is the probability that a randomly drawn rack is playable, i.e., contains some subset of tiles, not necessarily all 7, that spell a word in the dictionary?

Minimal subracks

The first interesting aspect of this problem is the size of the dictionary. There are 187,632 words in the 2014 Official Tournament and Club Word List (the latest for which an electronic version is accessible)… but we can immediately discard nearly 70% of them, those with more than 7 letters, leaving 56,624 words with at most 7 letters. (There is exactly one such word, pizzazz, that I’ll leave in our list, despite the fact that it is one of 13 words in the dictionary with no hope of ever being played: it has too many z‘s!)

But we can trim the list further, first by noting that the order of letters in a word doesn’t matter– this helps a little but not much– and second by noting that some words may contain other shorter words as subsets. For example, the single most frequently occurring word, the, contains the shorter word he. So when evaluating a rack to see whether it is playable or not, we only need to check if we can play he; if we can’t, then we can’t play the, either.

In other words, we only need to consider minimal subracks: the minimal elements of the inclusion partial order on sets of tiles spelling words in the dictionary. This is a huge savings: instead of 56,624 words with at most 7 letters, we only need to consider the following 223 minimal “words,” or unordered subsets of tiles, with blank tiles indicated by an underscore:

__, _a, _b, _cv, _d, _e, _f, _g, _h, _i, _j, _k, _l, _m, _n, _o, _p, _q, _r, _s, _t, _u, _vv, _w, _x, _y, _z, aa, ab, aco, acv, ad, ae, af, ag, ah, ai, ak, al, am, an, aov, ap, aqu, ar, as, at, auv, avv, aw, ax, ay, az, bbu, bcu, bdu, be, bfu, bgu, bi, bklu, bllu, bo, brr, bru, by, cciiv, ccko, cdu, cee, cei, cekk, ceu, cffu, cfku, cgku, cgllyy, chly, chrtw, ciirr, ciy, cklu, cllu, cmw, coo, coz, cru, cry, cuz, ddu, de, dfu, dgu, di, djuy, dkuu, dlu, do, dru, dry, duw, eeg, eej, eek, eequu, eev, eez, ef, egg, egk, egv, eh, eiv, eju, eku, ekz, el, em, en, eo, ep, er, es, et, ew, ex, ey, ffpt, fgu, fi, flu, fly, fo, fru, fry, ghlly, gi, gju, glu, go, gpy, grr, gru, gsyyyz, guv, guy, hi, hm, hnt, ho, hpt, hpy, hs, hty, hu, hwy, iirvz, ijzz, ik, il, im, in, io, ip, iq, irry, is, it, ivy, iwz, ix, jjuu, jkuu, jo, kkoo, klru, kouz, kruu, kst, ksy, kuy, lllu, llxyy, lnxy, lo, lpy, lsy, luu, luv, mm, mo, mu, my, no, nsy, nu, nwy, ooz, op, or, os, ot, ow, ox, oy, ppty, pry, pst, psy, ptyy, pu, pxy, rty, ruy, rwy, sty, su, tu, uuyz, uwz, ux, uzz, zzz
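Computing this antichain is straightforward; following is a possible sketch of the reduction (ignoring the handling of blank tiles, which inflate each word into multiple candidate multisets):

from collections import Counter

def minimal_subracks(words):
    """Reduce words to the minimal letter multisets under inclusion."""
    racks = sorted({tuple(sorted(w)) for w in words}, key=len)
    minimal = []
    for rack in racks:
        counts = Counter(rack)
        # Keep this rack only if no smaller minimal rack is a subset of it.
        if not any(all(counts[x] >= k for x, k in m.items())
                   for m in minimal):
            minimal.append(counts)
    return [''.join(sorted(m.elements())) for m in minimal]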

The figure below expands on this calculation, showing the number of minimal subracks on the y-axis if we limit the dictionary of words we want to be able to play in various ways: either by frequency on the x-axis, where dumber is to the left; or by minimum word length, where the bottom curve labeled “>=2” is the entire dictionary, and the top curve labeled “>=7” requires a bingo.

Number of minimal playable subsets of tiles in inclusion order on the set of all playable words in OTCWL 2014, restricted by length and/or frequency.

As this figure shows, we benefit greatly– computationally, that is– from being able to play short words. At the other extreme, if we require a bingo, then effectively every word is a minimal subrack, and we actually suffer further inflation by including the various ways to spell each word with blanks.

Finding a subrack in a trie

At this point, the goal is to compute the probability that a randomly drawn rack of 7 tiles contains at least one of the minimal subracks described above. It turns out that there are only 3,199,724 distinct (but not equally likely) possible racks, so it should be feasible to simply enumerate them, testing each rack one by one, accumulating the frequency of those that are playable… as long as each test for containing a minimal subrack is reasonably fast. (Even if we just wanted to estimate the probability via Monte Carlo simulation by sampling randomly drawn racks, we still need the same fast test for playability.)

This problem is ready-made for a trie data structure, an ordered tree where each edge is labeled with a letter (or blank), and each leaf vertex represents a minimal playable subrack containing the sorted multiset of letters along the path to the leaf. (We don’t have to “mark” any non-leaf vertices; only leaves represent playable words, since we reduced the dictionary to the antichain of minimal subracks.)

The following Python code represents a trie of multisets as a nested dictionary with single-element keys (letters in this case), implementing the two operations needed for our purpose:

  • Inserting a “word” multiset given as a sorted list of elements (or a sorted string of characters in this case),
  • Testing whether a given word (i.e., a drawn rack of 7 tiles, also sorted) contains a word in the trie as a subset.

Insertion is straightforward; the recursive subset test is more interesting:

def trie_insert(t, word):
    for letter in word:
        if not letter in t:
            t[letter] = {}
        t = t[letter]

def trie_subset(t, word):
    if not t:
        return True
    if not word:
        return False
    return (word[0] in t and trie_subset(t[word[0]], word[1:]) or
            trie_subset(t, word[1:]))
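For example, building the trie from a few of the 223 minimal subracks listed above, and testing a couple of racks against it:

t = {}
for rack in ['_a', 'aa', 'ab', 'hi']:
    trie_insert(t, rack)

print(trie_subset(t, sorted('aabcdef')))  # True: contains aa (and ab)
print(trie_subset(t, sorted('bcdfgjk')))  # False: no subset in the trie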

Enumerating racks

Armed with a test for whether a given rack is playable, it remains to enumerate all possible racks, testing each one in turn. In more general terms, we have a multiset of 100 tiles of 27 types, and we want to enumerate all possible 7-subsets. The following Python code does this, representing a multiset (or subset) as a list of counts of each element type. This is of interest mainly because it’s my first occasion to use the yield from syntax:

def multisubsets(multiset, r, prefix=[]):
    if r == 0:
        yield prefix + [0] * len(multiset)
    elif len(multiset) > 0:
        for k in range(min(multiset[0], r) + 1):
            yield from multisubsets(multiset[1:], r - k, prefix + [k])

(This and the rest of the Scrabble-specific code are available at the usual location here.)
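Putting the pieces together, the overall computation looks something like the following sketch, where tiles is a hypothetical list of (letter, count) pairs describing the 100-tile distribution, and minimal is the list of minimal subracks described above:

from math import comb

def playable_probability(tiles, minimal):
    """P(random 7-tile rack contains some minimal playable subrack)."""
    trie = {}
    for rack in minimal:
        trie_insert(trie, rack)
    letters = [a for a, n in tiles]
    counts = [n for a, n in tiles]
    hits = 0
    for sub in multisubsets(counts, 7):
        rack = sorted(''.join(a * k for a, k in zip(letters, sub)))
        if trie_subset(trie, rack):
            # Number of equally likely 7-tile draws yielding this rack.
            ways = 1
            for n, k in zip(counts, sub):
                ways *= comb(n, k)
            hits += ways
    return hits / comb(sum(counts), 7)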

The final results are shown in the figure below, with convention for each curve similar to the previous figure.

Probability that a random rack of 7 tiles is playable, containing a word in OTCWL 2014 restricted by length and/or frequency.

For example, the bottom curve essentially reproduces the earlier bingo analysis of the probability of playing a word of at least (i.e., exactly) 7 letters from a randomly drawn rack. At the other extreme, if we have the entire dictionary at our disposal (with all those short words of >=2 letters), it’s really hard to draw an unplayable rack (probability 0.00788349). Even if we limit ourselves to just the 100 or so most frequently occurring words– many of which are still very short– we can still play 9 out of 10 racks.

In between, suppose that even though we might know many short words, ego prevents actually playing anything shorter than, say, 5 letters in the first turn. Now the probabilities start to depend more critically on whether we are an expert who knows most of the words in the official dictionary, or a novice who might only know a third or fewer of them.


Analysis of Bingo


Suppose that Alice and Bob play a game for a dollar: they roll a single six-sided die repeatedly, until either:

  1. Alice wins if they observe each of the six possible faces at least once, or
  2. Bob wins if they observe any one face six times.

Would you rather be Alice or Bob in this scenario? Or does it matter? You can play a similar game with a deck of playing cards: shuffle the deck, and deal one card at a time from the top of the deck, with Alice winning when all four suits are dealt, and Bob winning when any particular suit is dealt four times. (This version has the advantage of providing a built-in record of the history of deals, in case of argument over who actually wins.) Again, would you rather be Alice or Bob?

It turns out that Alice has a distinct advantage in both games, winning nearly three times more often than Bob in the dice version, and nearly twice as often in the card version. The objective of this post is to describe some interesting mathematics involved in these games, and relate them to the game of Bingo, where a similar phenomenon is described in a recent Math Horizons article (see reference below): the winning Bingo card among multiple players is much more likely to have a horizontal bingo (all numbers in some row) than vertical (all numbers in some column).

Bingo with a single card

First, let’s describe how Bingo works with just one player. A Bingo card is a 5-by-5 grid of numbers, with each column containing 5 numbers randomly selected without replacement from 15 possibilities: the first “B” column is randomly selected from the numbers 1-15, the second “I” column is selected from 16-30, the third column from 31-45, the fourth column from 46-60, and the fifth “O” column from 61-75. An example is shown below.

Example of an American Bingo card.

A “caller” randomly draws, without replacement, from a pool of balls numbered 1 through 75, calling each number in turn as it is drawn, with the player marking the called number if it is present on his or her card. The player wins by marking all 5 squares in any row, column, or diagonal. (One minor wrinkle in this setup is that, in typical American-style Bingo, the center square is “free,” considered to be already marked before any numbers are called.)

It will be useful to generalize this setup with parameters (n,m), where each card is n \times n with each column selected from m possible values, so that standard Bingo corresponds to (n,m)=(5,15).

We can compute the probability distribution of the number of draws required for a single card to win. Bill Butler describes one approach, enumerating the 2^{n^2-1}=2^{24} possible partially-marked cards and computing the probability of at least one bingo for each such marking.

Alternatively, we can use inclusion-exclusion to compute the cumulative distribution directly, by enumerating just the 2^{2n+2}=2^{12} possible combinations of horizontal, vertical, and diagonal bingos (of which there are 5, 5, and 2, respectively) on a card with k marks. In Mathematica:

bingoSets[n_, free_: True, diag_: True] :=
 Module[{card = Partition[Range[n^2], n]},
  If[free, card[[Ceiling[n/2], Ceiling[n/2]]] = 0];
  DeleteCases[
   Join[
    card, Transpose[card],
    If[diag, {Diagonal[card], Diagonal[Reverse[card]]}, {}]],
   0, Infinity]]

bingoCDF[k_, nm_, bingos_] :=
  1 - Total@Map[
      (j = Length[Union @@ #];
        (-1)^Length[#] Binomial[nm - j, k - j]) &,
      Subsets[bingos]]/Binomial[nm, k]

bingos = bingoSets[5, False, True];
cdf = Table[bingoCDF[k, 75, bingos], {k, 1, 75}];

(Note the optional arguments specifying whether the center square is free, and whether diagonal bingo is allowed. It will be convenient shortly to consider a simplified version of the game, where these two “special” rules are discarded.)

The following figure shows the resulting distribution, with the expected number of 43.546 draws shown in red.

Cumulative probability distribution of a single-card bingo in at most the given number of draws. The average of 43.546 draws is shown in red.

Independence with 2 or more cards

Before getting to the “horizontal is more likely than vertical” phenomenon, it’s worth pointing out another non-intuitive aspect of Bingo. If instead of just a single card, we have a game with multiple players, possibly with thousands of different cards, what is the distribution of number of draws until someone wins?

If P_1(X \leq k) is the cumulative distribution for a single card as computed above, then since each of multiple cards is randomly– and independently– “generated,” intuitively it seems like the probability P_j(X \leq k) of at least one winning bingo among j cards in at most k draws should be given by

P_j(X \leq k) \stackrel{?}{=} 1-(1-P_1(X \leq k))^j

Butler uses exactly this approach. However, this is incorrect; although the values in the squares of multiple cards are independent, the presence or absence of winning bingos are not. Perhaps the best way to see this is to consider a “smaller” simplified version of the game, with (n,m)=(2,2), so that there are only four equally likely possible distinct cards:

The four possible cards in (2,2) Bingo.

Let’s further simplify the game so that only horizontal and vertical bingos are allowed, with no diagonals. Then the game must end after either two or three draws: with a single card, it ends in two draws with probability 2/3. However, with two cards, the probability of a bingo in two draws is 5/6, not 1-(1-2/3)^2=8/9.
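We can verify both of these probabilities by exhaustive enumeration; here is a short sketch in Python, representing each card by its two equally likely row patterns:

from itertools import combinations, product
from fractions import Fraction

def bingo_sets(card_type):
    """Winning pairs for a (2,2) card: two rows plus the two fixed columns."""
    rows = [{1, 3}, {2, 4}] if card_type == 0 else [{1, 4}, {2, 3}]
    return rows + [{1, 2}, {3, 4}]

pairs = list(combinations([1, 2, 3, 4], 2))

def p_bingo_in_two(num_cards):
    """P(some card has a bingo after the first two draws)."""
    total = Fraction(0)
    for types in product([0, 1], repeat=num_cards):
        wins = sum(1 for p in pairs
                   if any(set(p) in bingo_sets(t) for t in types))
        total += Fraction(wins, len(pairs))
    return total / 2 ** num_cards

print(p_bingo_in_two(1), p_bingo_in_two(2))  # 2/3 and 5/6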

Horizontal bingos with many cards

Finally, let’s come back to the initial problem: suppose that there are a large number of players, with so many cards in play that we are effectively guaranteed a winner as soon as either:

  1. At least one number from each of the n=5 column groups is drawn, resulting in a horizontal bingo on some card; or
  2. At least n=5 of the m=15 possible numbers are drawn from any one particular column group, resulting in a vertical bingo on some card.

(Let’s ignore the free square and diagonal bingos for now; the former is easily handled but unnecessarily complicates the analysis, while the latter would mean that (1) and (2) are not mutually exclusive.)

Then the interesting observation is that a horizontal bingo (1) is over three times more likely to occur than a vertical bingo (2). Furthermore, this setup– Bingo with a large number of cards– is effectively the same as the card and dice games described in the introduction: Bingo is (n,m)=(5,15), the card game is (4,13), and the dice version is effectively (6,\infty).

The Math Horizons article referenced below describes an approach to calculating these probabilities, which involves enumerating integer partitions. However, this problem is ready-made for generating functions, which take care of the partition housekeeping for us: let’s define

g_a(x) = \left(\sum_{j=a}^{n-1} {m \choose j}x^j\right)^{n-1}

so that, for example, for Bingo with no free square,

g_1(x) = \left({15 \choose 1}x^1 + {15 \choose 2}x^2 + {15 \choose 3}x^3 + {15 \choose 4}x^4\right)^4

Intuitively, each factor corresponds to a column, where each coefficient of x^j indicates the number of ways to draw exactly j numbers from that column (with some minimum number from each column specified by a). The overall coefficient of x^k indicates the number of ways to draw k numbers in total, with neither a horizontal nor vertical bingo.

Then using the notation from the article, the probability P(H_k) of a horizontal bingo on exactly the k-th draw is

P(H_k) = \frac{mn(k-1)!(mn-k)!}{(mn)!}[x^{k-1}]g_1(x)

and the probability P(V_k) of a vertical bingo on exactly the k-th draw is

P(V_k) = \frac{mn(k-1)!(mn-k)!}{(mn)!} {m-1 \choose n-1} [x^{k-n}](g_0(x)-g_1(x))

The summation

\sum_{k=n}^{(n-1)^2+1} P(H_k)

over all possible numbers of draws yields the overall probability of about 0.752 that a horizontal bingo is observed before a vertical one. Similarly, for the card game with (n,m)=(4,13), the probability that Alice wins is 22543417/34165005, or about 0.66. For the dice game– which requires a slight modification to the above formulation, left as an exercise for the reader– Alice wins with probability about 0.747.
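For the record, here is a sketch in Python implementing the formulas above, doing the polynomial arithmetic directly on coefficient lists:

from math import comb, factorial
from fractions import Fraction

def poly_power(p, e):
    """Coefficient list of the e-th power of polynomial p (also given as a
    coefficient list, constant term first)."""
    r = [1]
    for _ in range(e):
        r = [sum(r[i] * p[j - i]
                 for i in range(len(r)) if 0 <= j - i < len(p))
             for j in range(len(r) + len(p) - 1)]
    return r

def p_horizontal(n, m):
    """P(horizontal bingo before vertical bingo), many-cards limit."""
    # Each factor of g_1(x) has coefficient C(m, j) on x^j for 1 <= j <= n-1.
    g1 = poly_power([comb(m, j) if j >= 1 else 0 for j in range(n)], n - 1)
    total = Fraction(0)
    for k in range(n, (n - 1) ** 2 + 2):
        coeff = g1[k - 1] if k - 1 < len(g1) else 0
        total += coeff * Fraction(m * n * factorial(k - 1) *
                                  factorial(m * n - k), factorial(m * n))
    return total

print(float(p_horizontal(5, 15)))  # about 0.752 for Bingo
print(p_horizontal(4, 13))         # 22543417/34165005 for the card game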


  1. Benjamin, A., Kisenwether, J., and Weiss, B., The Bingo Paradox, Math Horizons, 25(1), September 2017, p. 18-21 [PDF]