Picking a perfect NCAA bracket: 2018 was the most unlikely tournament so far

Introduction

Every year, there are upsets and wild outcomes during the NCAA men’s basketball tournament. But this year felt, well, wilder. For example, for the first time in 136 games over the 34 years of the tournament’s current 64-team format, a #16 seed (UMBC) beat a #1 seed (Virginia) in the first round. (I refuse to acknowledge the abomination of the 4 “play-in” games in the zeroth round.) And I am a Kansas State fan who watched my #9 seed Wildcats beat Kentucky, a team that went to the Final Four in 4 of the last 8 years.

So I wondered whether this was indeed the “wildest” tournament ever… and it turns out that it was, by several reasonable metrics.

Modeling game probabilities

To compare the tournaments in different years, we assume that the probability of the outcome of any particular game depends only on the seeds of the opposing teams. Schwertman et al. (see reference below) suggest a reasonable model of the form

p_{i,j} = 1-p_{j,i} = \frac{1}{2}+k(s_i-s_j)

where s_i is some measure of the “strength” of seed i (ranging from 1 to 16), and the scale factor k calibrates the range of resulting probabilities, selected here so that the most extreme value p_{16,1}=1/136 matches the current maximum likelihood estimate based on the 136 observations over the past 34 years.

One simple strength function is the linear s_i=-i, although this would suggest, for example, that #1 vs. #5 and #11 vs. #15 are essentially identical match-ups. A better fit is

s_i = \Phi^{-1}(1-\frac{4i}{n})

where \Phi^{-1} is the quantile function (inverse CDF) of the standard normal distribution, and n=351 is the number of teams in all of Division I. The idea is that team strength is normally distributed, and the tournament invites the 64 teams in the upper tail of the distribution, as shown in the figure below.

Normally-distributed strength s_i for each seed.
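
To make the model concrete, here is a minimal sketch of both pieces, the normal strength function and the calibrated win probability. This is my own illustrative code, not from the original analysis, and it assumes SciPy's norm.ppf for the normal quantile:

from scipy.stats import norm

def strength(seed, n=351):
    """Normal model: strength of a seed is the corresponding upper-tail quantile."""
    return norm.ppf(1 - 4 * seed / n)

def win_probability(i, j, k):
    """Probability that seed i beats seed j: 1/2 + k * (s_i - s_j)."""
    return 0.5 + k * (strength(i) - strength(j))

# Calibrate k so that the most extreme match-up gives p(16 beats 1) = 1/136.
k = (1 / 136 - 0.5) / (strength(16) - strength(1))

print(win_probability(1, 16, k))   # 135/136, about 0.9926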

Probability of a perfect bracket

Armed with these candidate models, I looked at all of the tournaments since 1985, the first year of the current 64-team format. I have provided summary data sets before (a search of this blog for “NCAA” will yield several posts on this subject), but this analysis required more raw data, all of which is now available at the usual location here.

For each year of the tournament, we can ask: what is the probability of picking a perfect bracket that year, correctly identifying the winners of all 63 games? Actually, there are three reasonable variants of this question:

  1. If we flip a coin to pick each game, what is the probability of picking every game correctly?
  2. If we pick a “chalk” bracket, always picking the favored higher-seeded (i.e., lower-numbered) team to win each game, what is the probability of picking every game correctly?
  3. If we managed to pick the perfect bracket for a given year, what is the prior probability of that particular outcome?

The answer to the first question is 1 in 2^{63}, or the “1 in 9.2 quintillion” that appears in the popular press. And this is always exactly correct, no matter how individual teams actually match up in any given year, as long as we are flipping a coin to guess the outcome of each game. But this isn’t very realistic, since seed match-ups do matter; a #1 seed will beat a #16 seed… well, almost all of the time.

So the second question is more interesting, but also more complicated, since it does depend on our model of how different seeds match up… but it doesn’t depend on which year of the tournament we’re talking about, at least as long as we always use the same model. Using the strength models described above, a chalk bracket has a probability of around 1 in 100 billion of being correct (1 in 186 billion for the linear strength model, or 1 in 90 billion for the normal strength model).
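
Under this model, the probability of any specific bracket outcome is just the product of the corresponding per-game probabilities over all 63 games. Here is a small sketch of that calculation, continuing the hypothetical code above; the game list is made up for illustration (first-round chalk in a single region):

from math import prod

def bracket_probability(games, k):
    """Probability of a specific outcome: the product of the modeled win
    probabilities over the given (winner_seed, loser_seed) pairs."""
    return prod(win_probability(w, l, k) for w, l in games)

# First-round chalk in one region: 1 beats 16, 2 beats 15, ..., 8 beats 9.
region_round_1 = [(i, 17 - i) for i in range(1, 9)]
print(bracket_probability(region_round_1, k))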

The third question is the motivation for this post: the probability of a given year’s actual outcome will generally lie somewhere between the other two “extremes.” How has this probability varied over the years, and was 2018 really an outlier? The results are shown in the figure below.

Probability of a perfect bracket, 1985-2018.

The constant black line at the bottom is the 1 in 9.2 quintillion coin flip. The constant red and blue lines at the top are the probabilities of a chalk bracket, assuming the linear or normal strength models, respectively.

And in between are the actual outcomes of each tournament. (Aside: I tried a bar chart for this, but I think the line plot more clearly shows the comparison of the two models, as well as both the maximum and minimum behavior that we’re interested in here.) This year’s 2018 tournament was indeed the most unlikely, so to speak, although it has close competition, all in this decade. At the other extreme, 2007 was the most likely bracket.

Reference:

  1. Schwertman, N., McCready, T., and Howard, L., Probability Models for the NCAA Regional Basketball Tournaments, The American Statistician, 45(1), February 1991, p. 35-38 [JSTOR]

Whodunit logic puzzle

You are a detective investigating a robbery, with five suspects having made the following statements:

  • Paul says, “Neither Steve nor Ted was in on it.”
  • Quinn says, “Ray wasn’t in on it, but Paul was.”
  • Ray says, “If Ted was in on it, then so was Steve.”
  • Steve says, “Paul wasn’t in on it, but Quinn was.”
  • Ted says, “Quinn wasn’t in on it, but Paul was.”

You do not know which, nor even how many, of the five suspects were involved in the crime. However, you do know that every guilty suspect is lying, and every innocent suspect is telling the truth. Which suspect or suspects committed the crime?

I think puzzles similar to this one make good, fun homework problems in a discrete mathematics course introducing propositional logic. However, this particular puzzle is a bit more complex than the usual “Which one of three suspects is guilty?” type, not just because there are more suspects, but also because we don’t know how many suspects are guilty.

That added complexity is motivated by trying to transform this typically pencil-and-paper mathematical logic problem into a potentially nice computer science programming exercise: consider writing a program to automate solving this problem… or even better, writing a program to generate new random instances of problems like this one, while ensuring that the resulting puzzle has some reasonably “nice” properties. For example, the solution should be unique; but the puzzle should also be “interesting,” in that we should need all of the suspects’ statements to deduce who is guilty (that is, any proper subset of the statements should imply at least two distinct possible solutions).
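
For what it's worth, here is a brute-force sketch of the first exercise (my own code, not part of the puzzle): enumerate all 2^5 guilt assignments and keep those consistent with the rule that every guilty suspect lies and every innocent suspect tells the truth.

from itertools import product

names = ['Paul', 'Quinn', 'Ray', 'Steve', 'Ted']

def statements(g):
    """Truth value of each suspect's statement, given guilt assignment g."""
    return [not g['Steve'] and not g['Ted'],   # Paul
            not g['Ray'] and g['Paul'],        # Quinn
            not g['Ted'] or g['Steve'],        # Ray: if Ted, then Steve
            not g['Paul'] and g['Quinn'],      # Steve
            not g['Quinn'] and g['Paul']]      # Ted

solutions = []
for guilt in product([False, True], repeat=5):
    g = dict(zip(names, guilt))
    # A statement must be true exactly when its speaker is innocent.
    if all(s == (not g[n]) for n, s in zip(names, statements(g))):
        solutions.append([n for n in names if g[n]])

print(solutions)   # the puzzle is well-posed only if this list has exactly one entry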


Coupling from the past

Introduction

This is a follow-up to one of last month’s posts that contained some images of randomly-generated “lozenge tilings.” The focus here is not on the tilings themselves, but on the perfectly random sampling technique due to Propp and Wilson known as “coupling from the past” (see references below) that was used to generate them. As is often the case here, I don’t have any new mathematics to contribute; the objective of this post is just to go beyond the pseudo-code descriptions of the technique in most of the literature, and provide working Python code with a couple of different example applications.

The basic problem is this: suppose that we want to sample from some probability distribution \pi over a finite discrete space S, and that we have an ergodic Markov chain whose steady state distribution is \pi. An approximate sampling procedure is to pick an arbitrary initial state, then just run the chain for some large number of iterations, with the final state as your sample. This idea has been discussed here before, in the context of card shuffling, the board game Monopoly, as well as the lozenge tilings from last month.

For example, consider shuffling a deck of cards using the following iterative procedure: select a random adjacent pair of cards, and flip a coin to decide whether to put them in ascending or descending order (assuming any convenient total ordering, e.g. first by rank, then by suit alphabetically). Repeat for some “large” number of iterations.

There are two problems with this approach:

  1. It’s approximate; the longer we iterate the chain, the closer we get to the steady state distribution… but (usually) no matter when we stop, the distribution of the resulting state is never exactly \pi.
  2. How many iterations are enough? For many Markov chains, the mixing time may be difficult to analyze or even estimate.

Coupling from the past

Coupling from the past can be used to address both of these problems. For a surprisingly large class of applications, it is possible to sample exactly from the stationary distribution of a chain, by iterating multiple realizations of the chain until they are “coupled,” without needing to know ahead of time when to stop.

First, suppose that we can express the random state transition behavior of the chain using a fixed, deterministic function f:S \times [0,1) \to S, so that for a random variable U uniformly distributed on the unit interval,

\forall i,j\ P(f(s_i, U)=s_j) = p_{i,j}

In other words, given a current state X_t and random draw U_t, the next state is X_{t+1}=f(X_t, U_t).
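
As a concrete (and purely hypothetical) example of such an update function, consider a lazy random walk on the states {0, 1, ..., m}, where the single uniform draw decides whether to step down, stay put, or step up:

import random

def walk_update(s, u, m=5):
    """Deterministic update f(s, u) for a lazy random walk on {0, ..., m}."""
    if u < 1 / 3:
        return max(s - 1, 0)
    elif u < 2 / 3:
        return s
    else:
        return min(s + 1, m)

# One realization of the chain, driven by an explicit sequence of uniform draws.
s, draws = 0, [random.random() for _ in range(10)]
for u in draws:
    s = walk_update(s, u)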

Now consider a single infinite sequence of random draws (u_1, u_2, u_3, \ldots). We will use this same single source of randomness to iterate multiple realizations of the chain, one for each of the |S| possible initial states… but starting in the past, and running forward to time zero. More precisely, let’s focus on two particular chains with different initial states s_i and s_j at time -n, so that

X_{-n}=s_i, Y_{-n}=s_j

X_{-(n-1)}=f(X_{-n}, u_n), Y_{-(n-1)}=f(Y_{-n}, u_n)

\cdots

X_{-1}=f(X_{-2}, u_2), Y_{-1}=f(Y_{-2}, u_2)

X_0=f(X_{-1}, u_1), Y_0=f(Y_{-1}, u_1)

(Notice how the more random draws we generate, the farther back in time they are used. More on this shortly.)

A key observation is that if X_t=Y_t at any point, then the chains are “coupled:” since they experience the same sequence of randomness influencing their behavior, they will continue to move in lockstep for all subsequent iterations as well, up to X_0=Y_0. Furthermore, if the states at time zero are all the same for all possible initial states run forward from time -n, then the distribution of that final state X_0 is the stationary distribution of the chain.

But what if all initial states don’t end at the same final state? This is where the single source of randomness is key: we can simply look farther back in the past, say 2n time steps instead of n, by extending our sequence of random draws… as long as we re-use the existing random draws to make the same “updates” to the states at later times (i.e., closer to the end time zero).

Monotone coupling

So all we need to do to perfectly sample from the stationary distribution is

  1. Choose a sufficiently large n to look far enough into the past.
  2. Run |S| realizations of the chain, one for each initial state, all starting at time -n and running up to time zero.
  3. As long as the final state X_0 is the same for all initial states, output X_0 as the sample. Otherwise, look farther back in the past (n \leftarrow 2n), generate additional random draws accordingly, and go back to step 2.

This doesn’t seem very helpful, since step 2 requires enumerating elements of the typically enormous state space. Fortunately, in many cases, including both lozenge tiling and card shuffling, it is possible to simplify the procedure by imposing a partial order \preceq on the state space– with minimum and maximum elements– that is preserved by the update function f. That is, suppose that for all states s_i,s_j \in S and random draws u, if s_i \preceq s_j, then f(s_i, u) \preceq f(s_j,u).

Then instead of running the chain for all possible initial states, we can just run two chains, starting with the minimum and maximum elements in the partial order. If those two chains are coupled by time zero, then they will also “squeeze” any other initial state into the same coupling.

Following is the resulting Python implementation:

import random

def monotone_cftp(mc):
    """Return stationary distribution sample using monotone CFTP."""
    updates = [mc.random_update()]
    while True:
        s0, s1 = mc.min_max()
        for u in updates:
            s0.update(u)
            s1.update(u)
        if s0 == s1:
            break
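        # Not yet coupled: look farther back in the past by prepending new
        # random draws; the existing draws are re-used closer to time zero.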
        updates = [mc.random_update()
                   for k in range(len(updates) + 1)] + updates
    return s0

Sort of like the Fisher-Yates shuffle, this is one of those algorithms that is potentially easy to implement incorrectly. In particular, note that the list of “updates,” which is traversed in order during iteration of the two chains, is prepended with additional blocks of random draws as needed to start farther back in the past. The figure below shows the resulting behavior, mapping each output of the random number generator to the time at which it is used to update the chains.

Random draws in the order they are generated, vs. the order in which they are used in state transitions.

Coming back once again to the card shuffling example, the code below uses coupling from the past with random adjacent transpositions to generate an exactly uniform random permutation p. (A similar example generating random lozenge tilings may be found at the usual location here.)

class Shuffle:
    def __init__(self, n, descending=False):
        self.p = list(range(n))
        if descending:
            self.p.reverse()

    def min_max(self):
        n = len(self.p)
        return Shuffle(n, False), Shuffle(n, True)

    def __eq__(self, other):
        return self.p == other.p

    def random_update(self):
        return (random.randint(0, len(self.p) - 2),
                random.randint(0, 1))

    def update(self, u):
        k, c = u
        if (c == 1) != (self.p[k] < self.p[k + 1]):
            self.p[k], self.p[k + 1] = self.p[k + 1], self.p[k]

print(monotone_cftp(Shuffle(52)).p)

The minimum and maximum elements of the partial order are the identity and reversal permutations, as might be expected; but it’s an interesting puzzle to determine the actual partial order that these random adjacent transpositions preserve. Consider, for example, a similar process where we select a random adjacent pair of cards, then flip a coin to decide whether to swap them or not (vs. whether to put them in ascending or descending order). This process has the same uniform stationary distribution, but won’t work when applied to monotone coupling from the past.

Re-generating vs. storing random draws

One final note: although the above implementation is relatively easy to read, it may be prohibitively expensive to store all of the random draws as they are generated. The slightly more complex implementation below is identical in output behavior, but is more space-efficient by only storing “markers” in the random stream, re-generating the draws themselves as needed when looking farther back into the past.

import random

def monotone_cftp(mc):
    """Return stationary distribution sample using monotone CFTP."""
    updates = [(random.getstate(), 1)]
    while True:
        s0, s1 = mc.min_max()
        rng_next = None
        for rng, steps in updates:
            random.setstate(rng)
            for t in range(steps):
                u = mc.random_update()
                s0.update(u)
                s1.update(u)
            if rng_next is None:
                rng_next = random.getstate()
        if s0 == s1:
            break
        updates.insert(0, (rng_next, 2 ** len(updates)))
    random.setstate(rng_next)
    return s0

 

References:

  1. Propp, J. and Wilson, D., Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics, Random Structures and Algorithms, 9, 1996, p. 223-252 [PDF]
  2. Wilson, D., Mixing Times of Lozenge Tiling and Card Shuffling Markov Chains, Annals of Applied Probability, 14(1), 2004, p. 274-325 [PDF]

How high can you count in limited space?

Introduction

I recently saw several different checks, all from the same bank, each with a different dollar amount printed in a fixed-width font, similar to the shortened example below:

**PAY EXACTLY**twelve thousand thirty-four and 56/100****************

Counting asterisks confirmed that the amount fields on each check were all padded to exactly the same length… which made me wonder, what is the largest check that the bank could print, using English words for the dollar amount?

Actually, that’s not quite the question I had in mind. There are very short names for very large numbers, but it’s not very helpful to say that a bank can cut a check for, say, “one googol” dollars, when most smaller amounts would not fit in the same length of field.

So to make the problem less linguistic and more mathematical, let’s instead ask the following more precise question: what is the largest integer n(m) such that every dollar amount from zero to n(m) can be printed in words using at most m characters?

Converting integers into words

For short fields, such as m=67 in the case of the bank checks, brute force is sufficient; we just need a function to convert integers into words. This is a pretty common programming problem, with one Python implementation shown below, where integer_name(n) works for any |n|<10^{36}:

units = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven',
    'eight', 'nine', 'ten', 'eleven', 'twelve', 'thirteen', 'fourteen',
    'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen']
tens = ['zero', 'ten', 'twenty', 'thirty', 'forty', 'fifty', 'sixty',
    'seventy', 'eighty', 'ninety']
powers = ['zero', 'thousand', 'million', 'billion', 'trillion',
    'quadrillion', 'quintillion', 'sextillion', 'septillion', 'octillion',
    'nonillion', 'decillion']
hundred = 'hundred'
minus = 'minus'
comma = ','
and_ = ' and'
space = ' '
hyphen = '-'
empty = ''

def small_integer_name(n, use_and=False):
    s = empty
    if n >= 100:
        q, n = divmod(n, 100)
        s = units[q] + space + hundred + (
            (and_ if use_and else empty) + space if n > 0 else empty)
    if n >= 20:
        q, n = divmod(n, 10)
        s += tens[q] + (hyphen if n > 0 else empty)
    return (s + units[n] if n > 0 else s)

def integer_name(n, use_comma=False, use_and=False, power=0):
    if n < 0:
        return minus + space + integer_name(-n, use_comma, use_and)
    elif n == 0:
        return units[0]
    s = empty
    if n >= 1000:
        q, n = divmod(n, 1000)
        s = integer_name(q, use_comma, use_and, power + 1) + (
            (comma if use_comma else empty) + space if n > 0 else empty)
    return (s + small_integer_name(n, use_and) +
            (space + powers[power] if power > 0 else empty) if n > 0 else s)

There are a couple of things to note. First, although typical American style does not include commas separating thousands, or the word “and” between hundreds and tens (e.g., “one hundred and twenty-three”), it’s nice to have the option here, since it affects the calculation of n(m).

Also, “everything” is a variable, including the hyphen, comma, space, even the empty string. The addition operator is overloaded for string concatenation, but it’s easy to replace the string constants with, say, integer word counts, or syllable counts, or whatever, without changing the code. For example, we can automatically generate lousy haiku:

One hundred twenty-

one thousand seven hundred

seventy-seven

Coming back to the bank checks, it turns out that n(67)=1113322, so that the bank can print a check for any amount up to $1,113,322.99. Which doesn’t seem terribly large– I wonder if they fall back to using numerals if the amount doesn’t fit using words?
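
Here is a brute-force sketch of how such a value can be computed (my own code, using the integer_name function above and counting only the characters in the number words): scan upward until some amount no longer fits.

def n_brute_force(m, use_comma=False, use_and=False):
    """Largest n such that every amount 0..n fits in at most m characters."""
    n = 0
    while len(integer_name(n + 1, use_comma, use_and)) <= m:
        n += 1
    return n

print(n_brute_force(25))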

Twitter scale

This isn’t a new problem. A couple of years ago, FiveThirtyEight.com’s Riddler posed a problem involving the Twitter account @CountVonCount, of Sesame Street fame, essentially asking for n(139)… and then again asking for n(279) just last month, in response to the recent increase in maximum length of a tweet from 140 to 280 characters (in each case, less one character to account for an exclamation point).

These field widths are large enough that a brute force approach is no longer feasible. It’s a nice problem to implement a function to compute n(m) efficiently and exactly, with results as shown in the figure below.

How high can you count (on the y-axis) spelling each number in words using at most the number of characters on the x-axis?

All of the source code is available at the usual location here.


Proofs and theorems without words

Introduction

In the figure below, a regular hexagon with side length 12 is tiled with “lozenges,” sort of like triangular dominoes, each consisting of a pair of unit-length equilateral triangles joined at a common side.

Each lozenge is in one of three possible orientations. Although the orientations appear random, if we count carefully, we find that there are exactly the same number of lozenges in each of the three orientations. In fact, no matter how we tile the hexagon, the number of lozenges of each “type” is always the same (144 of each, in this case). Can you prove this?

Proof without words

Although the (proof of the) Pythagorean theorem seems to be the most common example of a proof without words— usually a picture or diagram that doesn’t require any words to explain– this problem, with its “proof” below, is probably my favorite.

One reason I like this problem, particularly as motivation for student discussion, is that it is somewhat controversial. Is the above “proof without words” really a satisfactory proof?

Theorem without words

Moving away from random tilings for the moment, the original motivation for this post wasn’t actually a proof without words, but a theorem without words. I recently learned an interesting result I had not seen before, involving no more than high school-level physics, that I thought could be presented as an animation without any accompanying explanation:

Unfortunately, I’m not sure this quite succeeds as a “theorem without words.” As with the tiling proof, there is more going on here than the above animation arguably conveys. In particular, one of the most interesting things about this problem– that the animation doesn’t really show– is that the shape (that is, the eccentricity) of the ellipse of apogees is invariant: it does not depend on the speed of the projectile, or gravity, or any relationship between the two.

More on random lozenge tilings

Finally, some source code: although there are plenty of papers and web sites with images of random tilings like those above, I wanted to make my own images, and working code is harder to find. See here for my Python implementation for generating a random lozenge tiling of a hexagon of a given size. For example, the following figure shows a random tiling of a hexagon with side length 64:

My implementation is quick and dirty in the sense that it simply iterates the Markov chain of single-step up-or-down moves in the corresponding family of non-intersecting lattice paths, essentially shuffling long enough for the resulting random tiling to be approximately uniform. (Note that the tilings in the above images are more random than they might appear, despite the “frozen” regions near the corners.) It would be interesting to extend this implementation to use coupling from the past, to generate an exactly uniform random tiling.
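
My implementation is linked above rather than reproduced here, but the following is a rough sketch of the same approximate-sampling idea in a different (equivalent) representation: a lozenge tiling of an (a, b, c) hexagon corresponds to a plane partition in an a-by-b box with entries at most c. This is my own simplified code, not the linked implementation, which works with non-intersecting lattice paths instead:

import random

def random_plane_partition(a, b, c, iterations=1000000):
    """Iterate single +1/-1 'box' moves on a plane partition in an a x b box
    with entries at most c; such plane partitions are in bijection with
    lozenge tilings of an (a, b, c) hexagon."""
    P = [[0] * b for _ in range(a)]
    for _ in range(iterations):
        i, j = random.randrange(a), random.randrange(b)
        v = P[i][j] + random.choice((-1, 1))
        # Accept the move only if rows and columns stay weakly decreasing
        # and the entry stays in [0, c].
        if (0 <= v <= c
                and (i == 0 or P[i - 1][j] >= v)
                and (j == 0 or P[i][j - 1] >= v)
                and (i == a - 1 or v >= P[i + 1][j])
                and (j == b - 1 or v >= P[i][j + 1])):
            P[i][j] = v
    return P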

References:

  1. David, G. and Tomei, C., The Problem of the Calissons, American Mathematical Monthly, 96(5), May 1989, p. 429-431 [JSTOR]
  2. Wilson, D., Mixing Times of Lozenge Tiling and Card Shuffling Markov Chains, Annals of Applied Probability, 14(1), 2004, p. 274-325 [PDF]

Hilbert halftone art

Introduction

This post was motivated by a recent attempt to transform a photograph into a large digital print, in what I hoped would be a mathematically interesting way. The idea was pretty simple: convert the (originally color) image into a black-on-white line drawing– a white background, with a single, continuous, convoluted black curve, one pixel wide.

This isn’t a new idea. One example of how to do this is to convert the image into a solution of an instance of the traveling salesman problem, with more “cities” clustered in darker regions of the source image. But I wanted to do something slightly different, with more explicitly visible structure… which doesn’t necessarily translate to more visual appeal: draw a Hilbert space-filling curve, but vary the order of the curve (roughly, the depth of recursion) locally according to the gray level of the corresponding pixels of the source image.

After some experimenting, I settled on the transformation described in the figure below. Each pixel of the source image is “inflated” to an 8-by-8 block of pixels in the output, with a black pixel (lower left) represented by a second-order Hilbert curve, and a white pixel (upper left) by just a line segment directly connecting the endpoints of the block, with two additional gray levels in between, each connecting progressively more/fewer points along the curve.

Conversion of 2×2-pixel image, with each of four gray levels mapped to corresponding 8×8 block of (approximated) Hilbert curve.

Example

The figure below shows an example of creating an image for input to the algorithm. There are two challenges to consider:

  • Size: The 8-fold inflation of each pixel means that if we want the final output to be 1024-by-1024, then the input image must be 128-by-128, as shown here. (For my project, I could afford a 512-by-512 input, with larger than single-pixel steps between points on the curve, yielding output suitable for a 36-inch print.)
  • Grayscale: The color in the input image must be quantized to just four gray levels, using your favorite photo editor. (I did this in Mathematica.)

Conversion of original color image to 128-by-128 image with 4 gray levels.

The figure below shows the resulting output. You may have to zoom in to see the details of the curve, especially in the black regions.

Black and white 1024-by-1024 image containing a single black Hilbert curve, one pixel in width.

Source code

Following is the Python source that does the transformation. It uses the Hilbert curve encoding module from my allRGB experiment, and I used Pygame for the image formatting so that I could watch the output as it was created.

import hilbert
import pygame

BLACK = (0, 0, 0, 255)
DARK_GRAY = (85, 85, 85, 255)
LIGHT_GRAY = (160, 160, 160, 255)
WHITE = (255, 255, 255, 255)

STEPS = {BLACK: [1] * 15,
         DARK_GRAY: [1, 1, 1, 1, 4, 1, 1, 5],
         LIGHT_GRAY: [4, 7, 4],
         WHITE: [15]}

class Halftone:
    def __init__(self, image, step):
        self.h = hilbert.Hilbert(2)
        self.index = -1
        self.pos = (0, 0)
        width, height = image.get_size()
        self.target = pygame.Surface((step * 4 * width, step * 4 * height))
        self.target.fill(WHITE)
        for pixel in range(width * height):
            self.move(1, step)
            for n in STEPS[tuple(image.get_at([w // 4 for w in self.pos]))]:
                self.move(n, step)

    def move(self, n, step):
        self.index = self.index + n
        next_pos = self.h.encode(self.index)
        pygame.draw.line(self.target, BLACK, [step * w for w in self.pos],
                                             [step * w for w in next_pos])
        self.pos = next_pos

if __name__ == '__main__':
    import sys
    step = int(sys.argv[1])
    for filename in sys.argv[2:]:
        pygame.image.save(Halftone(pygame.image.load(filename), step).target,
                          filename + '.ht.png')

Probability of playable racks in Scrabble

Introduction

Earlier this year, I spent some time calculating the probability of a Scrabble “bingo”: drawing a rack of 7 tiles and playing all of them in a single turn to spell a 7-letter word. The interesting part of the analysis was the use of the Google Books Ngrams data set to limit the dictionary of playable words by their frequency of occurrence, to account for novice players like me who might only know “easier” words in more common use. The result was that the probability of drawing a rack of 7 letters that can play a 7-letter word is about 0.132601… if we allow the entire official Scrabble dictionary, including words like zygoses (?). The probability is only about 0.07 if we assume that I only know about a third– the most frequently occurring third– of the 7-letter words in the dictionary.

But given just how poorly I play Scrabble, it seems optimistic to focus on a bingo. Instead of asking whether we can play a 7-letter word using the entire rack, let’s consider what turns out to be a much harder problem, namely, whether the rack is playable at all: that is,

Problem: What is the probability that a randomly drawn rack is playable, i.e., contains some subset of tiles, not necessarily all 7, that spell a word in the dictionary?

Minimal subracks

The first interesting aspect of this problem is the size of the dictionary. There are 187,632 words in the 2014 Official Tournament and Club Word List (the latest for which an electronic version is accessible)… but we can immediately discard nearly 70% of them, those with more than 7 letters, leaving 56,624 words with at most 7 letters. (Exactly one of these remaining words, pizzazz, is among the 13 words in the dictionary with no hope of ever being played, since it needs too many z’s; I’ll leave it in our list anyway.)

But we can trim the list further, first by noting that the order of letters in a word doesn’t matter– this helps a little but not much– and second by noting that some words may contain other shorter words as subsets. For example, the single most frequently occurring word, the, contains the shorter word he. So when evaluating a rack to see whether it is playable or not, we only need to check if we can play he; if we can’t, then we can’t play the, either.

In other words, we only need to consider minimal subracks: the minimal elements of the inclusion partial order on sets of tiles spelling words in the dictionary. This is a huge savings: instead of 56,624 words with at most 7 letters, we only need to consider the following 223 minimal “words,” or unordered subsets of tiles, with blank tiles indicated by an underscore:

__, _a, _b, _cv, _d, _e, _f, _g, _h, _i, _j, _k, _l, _m, _n, _o, _p, _q, _r, _s, _t, _u, _vv, _w, _x, _y, _z, aa, ab, aco, acv, ad, ae, af, ag, ah, ai, ak, al, am, an, aov, ap, aqu, ar, as, at, auv, avv, aw, ax, ay, az, bbu, bcu, bdu, be, bfu, bgu, bi, bklu, bllu, bo, brr, bru, by, cciiv, ccko, cdu, cee, cei, cekk, ceu, cffu, cfku, cgku, cgllyy, chly, chrtw, ciirr, ciy, cklu, cllu, cmw, coo, coz, cru, cry, cuz, ddu, de, dfu, dgu, di, djuy, dkuu, dlu, do, dru, dry, duw, eeg, eej, eek, eequu, eev, eez, ef, egg, egk, egv, eh, eiv, eju, eku, ekz, el, em, en, eo, ep, er, es, et, ew, ex, ey, ffpt, fgu, fi, flu, fly, fo, fru, fry, ghlly, gi, gju, glu, go, gpy, grr, gru, gsyyyz, guv, guy, hi, hm, hnt, ho, hpt, hpy, hs, hty, hu, hwy, iirvz, ijzz, ik, il, im, in, io, ip, iq, irry, is, it, ivy, iwz, ix, jjuu, jkuu, jo, kkoo, klru, kouz, kruu, kst, ksy, kuy, lllu, llxyy, lnxy, lo, lpy, lsy, luu, luv, mm, mo, mu, my, no, nsy, nu, nwy, ooz, op, or, os, ot, ow, ox, oy, ppty, pry, pst, psy, ptyy, pu, pxy, rty, ruy, rwy, sty, su, tu, uuyz, uwz, ux, uzz, zzz
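
As an aside, here is a sketch of how such a reduction might be computed (my own code, ignoring blanks for simplicity): process the words as letter multisets in order of increasing size, keeping only those that do not contain an already-kept multiset.

from collections import Counter

def minimal_multisets(words):
    """Minimal elements of the inclusion partial order on letter multisets."""
    candidates = sorted((Counter(w) for w in set(words)),
                        key=lambda c: sum(c.values()))
    minimal = []
    for c in candidates:
        # Keep c only if it contains no smaller multiset already kept.
        if not any(all(c[x] >= k for x, k in m.items()) for m in minimal):
            minimal.append(c)
    return minimal

# Tiny example: 'the' contains 'he' and 'tie' contains 'it',
# so only {e, h} and {i, t} are minimal.
print(minimal_multisets(['the', 'he', 'it', 'tie']))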

The figure below expands on this calculation, showing the number of minimal subracks on the y-axis if we limit the dictionary of words we want to be able to play in various ways: either by frequency on the x-axis, where dumber is to the left; or by minimum word length, where the bottom curve labeled “>=2” is the entire dictionary, and the top curve labeled “>=7” requires a bingo.

Number of minimal playable subsets of tiles in inclusion order on the set of all playable words in OTCWL 2014, restricted by length and/or frequency.

As this figure shows, we benefit greatly– computationally, that is– from being able to play short words. At the other extreme, if we require a bingo, then effectively every word is a minimal subrack, and we actually suffer further inflation by including the various ways to spell each word with blanks.

Finding a subrack in a trie

At this point, the goal is to compute the probability that a randomly drawn rack of 7 tiles contains at least one of the minimal subracks described above. It turns out that there are only 3,199,724 distinct (but not equally likely) possible racks, so it should be feasible to simply enumerate them, testing each rack one by one, accumulating the frequency of those that are playable… as long as each test for containing a minimal subrack is reasonably fast. (Even if we just wanted to estimate the probability via Monte Carlo simulation by sampling randomly drawn racks, we still need the same fast test for playability.)

This problem is ready-made for a trie data structure, an ordered tree where each edge is labeled with a letter (or blank), and each leaf vertex represents a minimal playable subrack containing the sorted multiset of letters along the path to the leaf. (We don’t have to “mark” any non-leaf vertices; only leaves represent playable words, since we reduced the dictionary to the antichain of minimal subracks.)

The following Python code represents a trie of multisets as a nested dictionary with single-element keys (letters in this case), implementing the two operations needed for our purpose:

  • Inserting a “word” multiset given as a sorted list of elements (or a sorted string of characters in this case),
  • Testing whether a given word (i.e., a drawn rack of 7 tiles, also sorted) contains a word in the trie as a subset.

Insertion is straightforward; the recursive subset test is more interesting:

def trie_insert(t, word):
    for letter in word:
        if not letter in t:
            t[letter] = {}
        t = t[letter]

def trie_subset(t, word):
    if not t:
        return True
    if not word:
        return False
    return (word[0] in t and trie_subset(t[word[0]], word[1:]) or
            trie_subset(t, word[1:]))
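
A quick sanity check, with a few stand-in “words” in place of the 223 minimal subracks (my own example, not from the post):

trie = {}
for word in ['eh', 'it', 'no']:               # stand-ins for the minimal subracks
    trie_insert(trie, sorted(word))

print(trie_subset(trie, sorted('bdeghkq')))   # True: the rack contains 'e' and 'h'
print(trie_subset(trie, sorted('bcdfgkq')))   # False: no playable subset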

Enumerating racks

Armed with a test for whether a given rack is playable, it remains to enumerate all possible racks, testing each one in turn. In more general terms, we have a multiset of 100 tiles of 27 types, and we want to enumerate all possible 7-subsets. The following Python code does this, representing a multiset (or subset) as a list of counts of each element type. This is of interest mainly because it’s my first occasion to use the yield from syntax:

def multisubsets(multiset, r, prefix=[]):
    if r == 0:
        yield prefix + [0] * len(multiset)
    elif len(multiset) > 0:
        for k in range(min(multiset[0], r) + 1):
            yield from multisubsets(multiset[1:], r - k, prefix + [k])
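
For example, the 2-subsets of the multiset {a, a, b}, represented by the counts [2, 1], are {a, b} and {a, a}:

print(list(multisubsets([2, 1], 2)))   # [[1, 1], [2, 0]]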

(This and the rest of the Scrabble-specific code are available at the usual location here.)
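
Putting the pieces together, the overall probability is a sum of hypergeometric weights over the playable racks. Here is a rough sketch of that accumulation (my own code, not from the post); it assumes a string letters naming the 27 tile types in the same sorted order used to build the trie, and a parallel list tile_counts giving how many of each tile are in the bag:

from math import comb

def playable_probability(tile_counts, letters, trie):
    """Probability that a random 7-tile rack contains a playable subrack."""
    total = comb(sum(tile_counts), 7)
    p = 0
    for rack in multisubsets(tile_counts, 7):
        # Number of ways to draw exactly this multiset of tiles.
        weight = 1
        for have, draw in zip(tile_counts, rack):
            weight *= comb(have, draw)
        word = ''.join(letters[i] * k for i, k in enumerate(rack))
        if trie_subset(trie, word):
            p += weight
    return p / total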

The final results are shown in the figure below, with convention for each curve similar to the previous figure.

Probability that a random rack of 7 tiles is playable, containing a word in OTCWL 2014 restricted by length and/or frequency.

For example, the bottom curve essentially reproduces the earlier bingo analysis of the probability of playing a word of at least (i.e., exactly) 7 letters from a randomly drawn rack. At the other extreme, if we have the entire dictionary at our disposal (with all those short words of >=2 letters), it’s really hard to draw an unplayable rack (probability 0.00788349). Even if we limit ourselves to just the 100 or so most frequently occurring words– many of which are still very short– we can still play 9 out of 10 racks.

In between, suppose that even though we might know many short words, ego prevents actually playing anything shorter than, say, 5 letters in the first turn. Now the probabilities start to depend more critically on whether we are an expert who knows most of the words in the official dictionary, or a novice who might only know a third or fewer of them.
