“Doctors say he’s got a 50/50 chance of living… though there’s only a 10% chance of that.”

I’ve lately had occasion to contemplate my own mortality. How long should I expect to live? The most recent life table published by the Centers for Disease Control (see the reference at the end of this post) indicates an expected lifespan of 76.5 years for a male. This is based on a model of age at death as a random variable with the probability density shown in the following figure.

The expected lifespan of 76.5 years is the mean $E[X]$ of this distribution (using the red curve for males). In other words, if we observed a large number of hypothetical (male) infants born in the reference period 2014– and they continued to experience 2014 mortality rates throughout their lifetimes– then their ages at death would follow the above distribution, with an average of 76.5 years.

However, I have more information now: I have already survived roughly four decades of life. So it makes sense to ask, what is my *conditional* expected age at death, given that I have already survived to, say, age 40? In other words, what is $E[X \mid X \geq 40]$?

This value is 78.8 years; I can expect to live to a greater age now than I thought I would when I was first born. The following figure shows this conditional expected age at death $E[X \mid X \geq a]$, as well as the corresponding expected *additional* lifespan $E[X \mid X \geq a] - a$, as a function of current age $a$.

For another example, suppose that I survive to age 70. Instead of expecting just another 6.5 years, my expected additional lifespan has jumped to 14.5 years.

Which brings us to the interesting observation motivating this post: suppose instead that I *die* at age 70. I will have missed out on an additional 14.5 years of life on average, compared to the rest of the septuagenarians around me. Put another way, at the moment of my death, I perceive that I am dying 14.5 years earlier than expected.

But this perceived “loss” *always* occurs, no matter when we die! (In terms of the above figure, the expected additional lifespan $E[X \mid X \geq a] - a$ is always positive.) We can average this effect over the entire population, and find that on average males die 12.2 years earlier than expected, and females die 10.8 years earlier than expected.
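These quantities are straightforward to compute directly from the table of probabilities at the end of this post. Following is a minimal sketch (my own, not code from the original analysis), assuming `p` is a dictionary mapping integer age at death to probability; treating ages as integers means the results will only approximate the values quoted above.

```python
def expected_age_at_death(p, a=0):
    """Conditional expected age at death E[X | X >= a]."""
    tail = {age: q for age, q in p.items() if age >= a}
    return sum(age * q for age, q in tail.items()) / sum(tail.values())

def average_loss(p):
    """Average years 'lost' at death: mean of E[X | X >= a] - a over ages a."""
    return sum(q * (expected_age_at_death(p, age) - age)
               for age, q in p.items())
```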

**Reference**:

- Arias, E., United States Life Tables 2014, *National Vital Statistics Reports*, **66**(4), August 2017 [PDF]

Following are the probabilities for the United States 2014 period life table used in this post, derived from the NVSR data in the above reference, extended to maximum age 120 using the methodology described in the technical notes.

```
Age  P(all)          P(male)          P(female)
===============================================
0    0.005831        0.006325         0.005313
1    0.000367843     0.000391508      0.000343167
2    0.000246463     0.000276133      0.000216767
3    0.000182814     0.000206546      0.000157072
4    0.000156953     0.000183668      0.000129216
5    0.000141037     0.000160804      0.000120255
6    0.000125127     0.000142914      0.000106328
7    0.000112203     0.000128008      0.0000963806
8    0.000100276     0.000112117      0.0000884231
9    0.0000913317    0.0000992073     0.0000834481
10   0.0000883454    0.0000932456     0.0000824477
11   0.0000952854    0.000103156      0.0000874072
12   0.000119095     0.000137857      0.000101304
13   0.000164729     0.000203286      0.000124134
14   0.000227209     0.000294457      0.000155893
15   0.000293617     0.000391501      0.000190617
16   0.000362946     0.000491412      0.000227306
17   0.000442117     0.000609009      0.000265957
18   0.000529115     0.000743227      0.000302594
19   0.000616971     0.000881116      0.000338207
20   0.00070566      0.00101964       0.000373784
21   0.000786255     0.00114394       0.000408332
22   0.000847887     0.0012343        0.000438875
23   0.000886654     0.00128494       0.000465417
24   0.000909534     0.00130883       0.000490933
25   0.000929392     0.0013228        0.0005174
26   0.000952147     0.00134161       0.000545804
27   0.000977786     0.00136426       0.00057515
28   0.0010063       0.00139366       0.000605432
29   0.00103864      0.00142878       0.00063566
30   0.00107285      0.0014657        0.000668788
31   0.00110792      0.00150147       0.000705793
32   0.00114483      0.0015361        0.000745674
33   0.00118454      0.00156959       0.000792356
34   0.00122898      0.00160582       0.000845811
35   0.00128495      0.00165443       0.000908956
36   0.00135141      0.00171632       0.000981746
37   0.00142538      0.00178654       0.00106021
38   0.00150387      0.0018631        0.00114136
39   0.00158781      0.00194786       0.00122516
40   0.00168489      0.00204935       0.00131843
41   0.00179886      0.00217314       0.00142304
42   0.00192761      0.00231898       0.00153401
43   0.00207581      0.00249327       0.00165615
44   0.00224899      0.00270038       0.00179419
45   0.00243725      0.00292846       0.00194116
46   0.00265083      0.0031884        0.00210855
47   0.0029103       0.00350311       0.00231349
48   0.00321588      0.00387243       0.0025555
49   0.0035468       0.00427362       0.00281771
50   0.00387592      0.00467212       0.00307957
51   0.00420287      0.00507013       0.00333606
52   0.00454693      0.00549737       0.00359937
53   0.00492128      0.00597165       0.0038758
54   0.00532664      0.00649003       0.00417147
55   0.00575619      0.00703581       0.00448667
56   0.00619215      0.00758275       0.00481225
57   0.00662626      0.00813029       0.00513737
58   0.00705499      0.00866988       0.00545972
59   0.00748745      0.00921066       0.00578812
60   0.00794918      0.00978879       0.0061402
61   0.0084469       0.010399         0.00653116
62   0.0089597       0.010994         0.00696556
63   0.00947691      0.0115436        0.00744933
64   0.0100035       0.0120603        0.00797895
65   0.0105466       0.0125691        0.00854801
66   0.0111425       0.0131347        0.00916566
67   0.0118165       0.0137895        0.00985298
68   0.0126025       0.0145881        0.010625
69   0.0135386       0.0155721        0.0115183
70   0.014622        0.016711         0.0125556
71   0.0157853       0.0179169        0.0136863
72   0.0169733       0.0191484        0.0148415
73   0.0181664       0.0203416        0.0160437
74   0.0193544       0.0214907        0.0172763
75   0.0205581       0.0226235        0.0185559
76   0.0219039       0.023887         0.0199909
77   0.0233782       0.0252875        0.0215559
78   0.0249405       0.0266573        0.0233399
79   0.0266659       0.0281283        0.0253501
80   0.0283006       0.0295587        0.0272207
81   0.0298041       0.0307938        0.0290306
82   0.0311707       0.0318902        0.0307088
83   0.0326118       0.0329808        0.0325375
84   0.0338734       0.0336728        0.0344093
85   0.0348103       0.0342521        0.0357896
86   0.0356915       0.0345144        0.0373244
87   0.036144        0.0342741        0.0384714
88   0.0361034       0.0334914        0.0391388
89   0.0355212       0.0321521        0.0392438
90   0.0343716       0.0302738        0.0387212
91   0.0326583       0.0279093        0.0375332
92   0.0304192       0.0251463        0.0356786
93   0.0277276       0.0221028        0.0331989
94   0.02469         0.0189177        0.030181
95   0.0214386       0.0157381        0.0267542
96   0.0181203       0.0127037        0.0230805
97   0.0148823       0.00993297       0.0193396
98   0.011857        0.00751144       0.0157101
99   0.00914934      0.00548606       0.0123497
100  0.00682791      0.0038652        0.00937893
101  0.0049216       0.00262443       0.0068711
102  0.00342273      0.00171608       0.00484973
103  0.00229461      0.00108015       0.00329442
104  0.00148199      0.000654343      0.00215221
105  0.000921789     0.000381551      0.00135156
106  0.000552114     0.000214241      0.000815772
107  0.00031851      0.000115918      0.00047333
108  0.000177059     0.0000604924     0.000264139
109  0.0000949139    0.0000304833     0.000141878
110  0.0000491106    0.0000148533     0.0000734294
111  0.0000245565    0.00000700891    0.0000366663
112  0.0000118822    0.00000320819    0.0000176914
113  0.00000557214   0.00000142697    0.00000826206
114  0.00000253659   0.000000617876   0.00000374136
115  0.00000112287   0.000000260924   0.00000164594
116  0.0000004842    0.00000010766    0.00000070483
117  0.000000203758  0.0000000434809  0.000000294373
118  0.0000000838243 0.0000000172192  0.000000120143
119  0.0000000337717 0.0000000066978  0.0000000480077
120  0.0000000216956 0.00000000409582 0.0000000304297
```

The title question came up recently, which I think makes for an interesting combinatorics exercise. The idea is that, given the extent of “borrowing” of past musical ideas by later artists, are we in danger of running out of new music?

We can turn this into a combinatorics problem by focusing solely on *pitch* and *rhythm*: in a single bar of music, how many possible melodies are there, consisting of a sequence of notes and rests of varying pitch and duration? The point is that this number is *finite*; it may be astronomical, but *how* astronomical?

This is not a new question, and there are plenty of answers out there which make various simplifying assumptions. For example, Vsauce has a video, “Will We Ever Run Out of New Music?,” which in turn refers to a write-up, “How many melodies are there in the universe?,” describing a calculation based on a recurrence relation that effectively requires cutting segments of a bar exactly in halves– undercounting in a way that I suspect was not intentional, implicitly prohibiting even relatively simple melodies like “I’ll Be Home for Christmas.”

But the most common simplifying assumption seems to be a lack of treatment of *rests*— that is, only counting melodies consisting of a sequence of *notes*. Rests are an interesting wrinkle that complicates the counting problem: for example, a half note is different from two consecutive quarter notes of the same pitch, but a half rest *sounds the same* as two consecutive quarter rests. The objective of this post is to add this “expressive power” to the calculation of possible melodies.

**Solution**

Consider a single bar in 4/4 time, consisting of a sequence of whole, half, quarter, eighth, and sixteenth notes and/or rests, with notes chosen from 13 possible pitches, allowing melodies within an octave of the 12-pitch chromatic scale, but also allowing an octave jump (e.g., “Take Me Out to the Ball Game,” “Over the Rainbow,” etc.).

We can encode the choice of a single note of one of $m = 13$ possible pitches with the following generating function, weighted by duration in units of a sixteenth note:

$$n(x) = m \left( x + x^2 + x^4 + x^8 + x^{16} \right)$$

and all possible rests with

$$r(x) = \sum_{k=1}^{16} x^k$$

(since a block of consecutive rests sounds the same regardless of how it is subdivided, one term per total duration suffices).

Then the generating function for the number of possible melodies is

$$f(x) = (1 + r(x)) \cdot \frac{1}{1 - n(x)(1 + r(x))}$$

Intuitively, a melody consists of zero or one rest, followed by a sequence of zero or more sub-sequences, each consisting of a note followed by zero or one rest. The coefficient $[x^{16}] f(x)$ is the number of one-bar melodies given the above constraints… but this counts two melodies as distinct even if they only differ in relative pitch. The number of possible melodies consisting of sequences of *intervals* confined to at most an octave jump is

$$\frac{[x^{16}] f(x) - 1}{m} + 1$$

where the +1 accounts for the single “silent melody” of a whole rest. The result is 3,674,912,999,046,911,152, or about 3.7 billion billion possible melodies.
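For the record, here is a minimal sketch (my own reconstruction, not the original code) that evaluates this formula with truncated polynomial arithmetic:

```python
TERMS = 17  # track coefficients of x^0 through x^16
M = 13      # possible pitches

def mul(p, q):
    """Multiply polynomials given as coefficient lists, truncated."""
    out = [0] * TERMS
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < TERMS:
                out[i + j] += a * b
    return out

n = [0] * TERMS
for d in (1, 2, 4, 8, 16):  # sixteenth through whole note, in sixteenths
    n[d] = M
one_plus_r = [1] * TERMS    # 1 + r(x)

g = mul(n, one_plus_r)      # a note followed by zero or one rest
f, term = list(one_plus_r), list(one_plus_r)
for _ in range(16):         # geometric series: (1 + r(x)) * sum of g(x)^k
    term = mul(term, g)
    f = [a + b for a, b in zip(f, term)]

print((f[16] - 1) // M + 1)  # should reproduce the count above
```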

**Results**

The machinery described above may be easily extended to consider different sets of assumptions: different time signatures, longer or shorter lists of possible notes to choose from, dotted notes and rests, triplets (e.g., the *Star Wars* theme), etc. The figure below shows the number of possible one-bar melodies for a variety of such assumptions.

As might be expected, dotted notes and/or rests do not affect the “space” of possible melodies nearly as much as note *duration*: halve the shortest allowable note value, and you very roughly double the number of “bits” in the representation of a melody. If we extend our expressive power to allow 32nd (possibly dotted) notes and rests, then there are 6,150,996,564,625,709,162,647,180,518,925,064,281,006 possible melodies.

Of course, these calculations only address the question of how many melodies are *possible*— not how many of such melodies are actually appealing to our human ears.

Every year, there are upsets and wild outcomes during the NCAA men’s basketball tournament. But this year felt, well, *wilder*. For example, for the first time in 136 games over the 34 years of the tournament’s current 64-team format, a #16 seed (UMBC) beat a #1 seed (Virginia) in the first round. (I refuse to acknowledge the abomination of the 4 “play in” games in the zero-th round.) And I am a Kansas State fan, who watched my #9 seed Wildcats beat *Kentucky*, a team that went to the Final Four in 4 of the last 8 years.

So I wondered whether this was indeed the “wildest” tournament ever… and it turns out that it was, by several reasonable metrics.

**Modeling game probabilities**

To compare the tournaments in different years, we assume that the probability of the outcome of any particular game depends only on the *seeds* of the opposing teams. Schwertman et al. (see reference below) suggest a reasonable model of the form

$$P(i \text{ beats } j) = \frac{1}{2} + k(s(i) - s(j))$$

where $s(i)$ is some measure of the “strength” of seed $i$ (ranging from 1 to 16), and the scale factor $k$ calibrates the range of resulting probabilities, selected here so that the most extreme value $P(1 \text{ beats } 16)$ matches the current maximum likelihood estimate of 135/136 based on the 136 observations over the past 34 years.

One simple strength function is the linear $s(i) = -i$, although this would suggest, for example, that #1 vs. #5 and #11 vs. #15 are essentially identical match-ups. A better fit is

$$s(i) = \Phi^{-1}\left(1 - \frac{4i - 1.5}{n}\right)$$

where $\Phi^{-1}$ is the quantile function of the standard normal distribution, and $n$ is the number of teams in all of Division I (about 350). The idea is that team strength is normally distributed, and the tournament invites the 64 teams in the upper tail of the distribution, as shown in the figure below.
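A sketch of these two strength functions and the calibrated win probability (my reconstruction, not the original analysis code; the Division I team count of 351 is an assumption):

```python
from statistics import NormalDist

N_DIV1 = 351  # approximate number of Division I teams

def s_linear(i):
    return -i

def s_normal(i):
    # average national rank of the four seed-i teams is 4i - 1.5
    return NormalDist().inv_cdf(1 - (4 * i - 1.5) / N_DIV1)

def win_probability(s, p_extreme=135 / 136):
    """P(i beats j), calibrated so that P(1 beats 16) = p_extreme."""
    k = (p_extreme - 0.5) / (s(1) - s(16))
    return lambda i, j: 0.5 + k * (s(i) - s(j))

p = win_probability(s_normal)
print(p(1, 16), p(8, 9))  # the most extreme and most even match-ups
```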

**Probability of a perfect bracket**

Armed with these candidate models, I looked at all of the tournaments since 1985, the first year of the current 64-team format. I have provided summary data sets before (a search of this blog for “NCAA” will yield several posts on this subject), but this analysis required more raw data, all of which is now available at the usual location here.

For each year of the tournament, we can ask: what is the probability of picking a perfect bracket in that year, correctly identifying the winners of all 63 games? Actually, there are three reasonable variants of this question:

- If we flip a coin to pick each game, what is the probability of picking every game correctly?
- If we pick a “chalk” bracket, always picking the favored higher-seeded (i.e., lower-numbered) team to win each game, what is the probability of picking every game correctly?
- If we managed to pick the perfect bracket for a given year, what is the prior probability of that particular outcome?

The answer to the first question is 1 in $2^{63}$, or the “1 in 9.2 quintillion” that appears in the popular press. And this is always exactly correct, no matter how individual teams actually match up in any given year, as long as we are flipping a coin to guess the outcome of each game. But this isn’t very realistic, since seed match-ups *do* matter; a #1 seed will beat a #16 seed… well, *almost* all of the time.

So the second question is more interesting, but also more complicated, since it *does* depend on our model of how different seeds match up… but it doesn’t depend on which year of the tournament we’re talking about, at least as long as we always use the same model. Using the strength models described above, a chalk bracket has a probability of around 1 in 100 billion of being correct (1 in 186 billion for the linear strength model, or 1 in 90 billion for the normal strength model).
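Continuing the sketch above, the chalk bracket probability is just the product of the favorite’s win probability over all 63 games, where chalk outcomes determine each round’s matchups (four identical regions, then three #1 vs. #1 games):

```python
games = [(1, 16), (8, 9), (5, 12), (4, 13),   # round of 64
         (6, 11), (3, 14), (7, 10), (2, 15),
         (1, 8), (4, 5), (3, 6), (2, 7),      # round of 32
         (1, 4), (2, 3),                      # regional semifinals
         (1, 2)]                              # regional final

region = 1.0
for i, j in games:
    region *= p(i, j)

chalk = region ** 4 * 0.5 ** 3  # three #1 vs. #1 games are coin flips
print(1 / chalk)                # on the order of 10**11
```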

The third question is the motivation for this post: the probability of a given year’s actual outcome will generally lie somewhere between the other two “extremes.” How has this probability varied over the years, and was 2018 really an outlier? The results are shown in the figure below.

The constant black line at the bottom is the 1 in 9.2 quintillion coin flip. The constant red and blue lines at the top are the probabilities of a chalk bracket, assuming the linear or normal strength models, respectively.

And in between are the actual outcomes of each tournament. (*Aside*: I tried a bar chart for this, but I think the line plot more clearly shows the comparison of the two models, as well as both the maximum *and* minimum behavior that we’re interested in here.) This year’s 2018 tournament was indeed the most unlikely, so to speak, although it has close competition, all in this decade. At the other extreme, 2007 was the *most* likely bracket.

**Reference:**

- Schwertman, N., McCready, T., and Howard, L., Probability Models for the NCAA Regional Basketball Tournaments, *The American Statistician*, **45**(1), February 1991, p. 35-38 [JSTOR]

A crime has been committed by one or more of five suspects, each of whom makes one statement when questioned:

- Paul says, “Neither Steve nor Ted was in on it.”
- Quinn says, “Ray wasn’t in on it, but Paul was.”
- Ray says, “If Ted was in on it, then so was Steve.”
- Steve says, “Paul wasn’t in on it, but Quinn was.”
- Ted says, “Quinn wasn’t in on it, but Paul was.”

You do not know which, nor even how many, of the five suspects were involved in the crime. However, you do know that every guilty suspect is lying, and every innocent suspect is telling the truth. Which suspect or suspects committed the crime?

I think puzzles similar to this one make good, fun homework problems in a discrete mathematics course introducing propositional logic. However, this particular puzzle is a bit more complex than the usual “Which one of three suspects is guilty?” type, not just because there are more suspects, but also because we don’t know *how many* suspects are guilty.

That added complexity is motivated by trying to transform this typically pencil-and-paper mathematical logic problem into a potentially nice computer science programming exercise: consider writing a program to *automate* solving this problem… or even better, writing a program to *generate new random instances* of problems like this one, while ensuring that the resulting puzzle has some reasonably “nice” properties. For example, the solution should be unique; but the puzzle should also be “interesting,” in that we should need *all* of the suspects’ statements to deduce who is guilty (that is, any proper subset of the statements should imply at least two distinct possible solutions).
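For the first of these, a brute-force solver is straightforward (a minimal sketch of my own): enumerate all $2^5$ guilt assignments, keeping those where every guilty suspect’s statement is false and every innocent suspect’s is true.

```python
from itertools import product

names = ['Paul', 'Quinn', 'Ray', 'Steve', 'Ted']

statements = {
    'Paul':  lambda g: not g['Steve'] and not g['Ted'],
    'Quinn': lambda g: not g['Ray'] and g['Paul'],
    'Ray':   lambda g: not g['Ted'] or g['Steve'],  # if Ted, then Steve
    'Steve': lambda g: not g['Paul'] and g['Quinn'],
    'Ted':   lambda g: not g['Quinn'] and g['Paul'],
}

for bits in product([False, True], repeat=5):
    g = dict(zip(names, bits))
    # a statement is true if and only if its speaker is innocent
    if all(statements[x](g) != g[x] for x in names):
        print([x for x in names if g[x]])
```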

This is a follow-up to one of last month’s posts that contained some images of randomly-generated “lozenge tilings.” The focus here is not on the tilings themselves, but on the *perfectly random* sampling technique due to Propp and Wilson known as “coupling from the past” (see references below) that was used to generate them. As is often the case here, I don’t have any new mathematics to contribute; the objective of this post is just to go beyond the pseudo-code descriptions of the technique in most of the literature, and provide working Python code with a couple of different example applications.

The basic problem is this: suppose that we want to sample from some probability distribution $\pi$ over a finite discrete space $S$, and that we have an ergodic Markov chain on $S$ whose steady state distribution is $\pi$. An *approximate* sampling procedure is to pick an arbitrary initial state, then just run the chain for some large number of iterations, with the final state as your sample. This idea has been discussed here before, in the context of card shuffling, the board game Monopoly, as well as the lozenge tilings from last month.

For example, consider shuffling a deck of cards using the following iterative procedure: select a random *adjacent* pair of cards, and flip a coin to decide whether to put them in ascending or descending order (assuming any convenient total ordering, e.g. first by rank, then by suit alphabetically). Repeat for some “large” number of iterations.

There are two problems with this approach:

- It’s approximate; the longer we iterate the chain, the closer we get to the steady state distribution… but (usually) no matter when we stop, the distribution of the resulting state is never exactly $\pi$.
- How many iterations are enough? For many Markov chains, the mixing time may be difficult to analyze or even estimate.

**Coupling from the past**

Coupling from the past can be used to address both of these problems. For a surprisingly large class of applications, it is possible to sample *exactly* from the stationary distribution of a chain, by iterating multiple realizations of the chain until they are “coupled,” without needing to know ahead of time when to stop.

First, suppose that we can express the random state transition behavior of the chain using a fixed, deterministic update function $f$, so that for a random variable $U$ uniformly distributed on the unit interval,

$$X_{t+1} = f(X_t, U)$$

In other words, given a current state $x$ and random draw $u$, the next state is $f(x, u)$.

Now consider a single infinite sequence of random draws $(u_0, u_{-1}, u_{-2}, \ldots)$. We will use this same *single* source of randomness to iterate *multiple* realizations of the chain, one for each of the possible initial states… but starting *in the past*, and running forward to time zero. More precisely, let’s focus on two particular chains with different initial states $x$ and $y$ at time $-n$, so that

$$X_{-n} = x, \quad Y_{-n} = y, \qquad X_t = f(X_{t-1}, u_t), \quad Y_t = f(Y_{t-1}, u_t), \quad -n < t \leq 0$$

(Notice how the more random draws we generate, the farther *back* in time they are used. More on this shortly.)

A key observation is that if $X_t = Y_t$ at any point, then the chains are “coupled:” since they experience the same sequence of randomness influencing their behavior, they will continue to move in lockstep for all subsequent iterations as well, up to time zero. Furthermore, if the states at time zero are *all* the same for *all* possible initial states run forward from time $-n$, then the distribution of that final state is the stationary distribution of the chain.

But what if all initial states *don’t* end at the same final state? This is where the single source of randomness is key: we can simply look farther back in the past, say starting at time $-2n$ instead of $-n$, by extending our sequence of random draws… as long as we *re-use the existing random draws* to make the same “updates” to the states at later times (i.e., closer to the end time zero).

**Monotone coupling**

So all we need to do to perfectly sample from the stationary distribution is

- Choose a sufficiently large $n$ to look far enough into the past.
- Run $|S|$ realizations of the chain, one for each possible initial state, all starting at time $-n$ and running up to time zero.
- As long as the final state $X_0$ is the same for all initial states, output $X_0$ as the sample. Otherwise, look farther back in the past (e.g., doubling to $-2n$), generate additional random draws accordingly, and go back to step 2.

This doesn’t seem very helpful, since step 2 requires enumerating all elements of the typically enormous state space $S$. Fortunately, in many cases, including both lozenge tiling and card shuffling, it is possible to simplify the procedure by imposing a partial order $\preceq$ on the state space– with minimum and maximum elements– that is preserved by the update function $f$. That is, suppose that for all states $x, y$ and random draws $u$, if $x \preceq y$, then $f(x, u) \preceq f(y, u)$.

Then instead of running the chain for *all* possible initial states, we can just run *two* chains, starting with the minimum and maximum elements in the partial order. If those two chains are coupled by time zero, then they will also “squeeze” any other initial state into the same coupling.

Following is the resulting Python implementation:

```python
import random

def monotone_cftp(mc):
    """Return stationary distribution sample using monotone CFTP."""
    updates = [mc.random_update()]
    while True:
        s0, s1 = mc.min_max()
        for u in updates:
            s0.update(u)
            s1.update(u)
        if s0 == s1:
            break
        updates = [mc.random_update()
                   for k in range(len(updates) + 1)] + updates
    return s0
```

Sort of like the Fisher-Yates shuffle, this is one of those algorithms that is potentially easy to implement incorrectly. In particular, note that the list of “updates,” which is traversed *in order* during iteration of the two chains, is *prepended* with additional blocks of random draws as needed to start farther back in the past. The figure below shows the resulting behavior, mapping each output of the random number generator to the time at which it is used to update the chains.

Coming back once again to the card shuffling example, the code below uses coupling from the past with random adjacent transpositions to generate an exactly uniform random permutation of a 52-card deck. (A similar example generating random lozenge tilings may be found at the usual location here.)

```python
class Shuffle:
    def __init__(self, n, descending=False):
        self.p = list(range(n))
        if descending:
            self.p.reverse()

    def min_max(self):
        n = len(self.p)
        return Shuffle(n, False), Shuffle(n, True)

    def __eq__(self, other):
        return self.p == other.p

    def random_update(self):
        return (random.randint(0, len(self.p) - 2),
                random.randint(0, 1))

    def update(self, u):
        k, c = u
        if (c == 1) != (self.p[k] < self.p[k + 1]):
            self.p[k], self.p[k + 1] = self.p[k + 1], self.p[k]

print(monotone_cftp(Shuffle(52)).p)
```

The minimum and maximum elements of the partial order are the identity and reversal permutations, as might be expected; but it’s an interesting puzzle to determine the actual partial order that these random adjacent transpositions preserve. Consider, for example, a similar process where we select a random adjacent pair of cards, then flip a coin to decide *whether to swap them or not* (vs. whether to put them in ascending or descending order). This process has the same uniform stationary distribution, but won’t work when applied to monotone coupling from the past.

**Re-generating vs. storing random draws**

One final note: although the above implementation is relatively easy to read, it may be prohibitively expensive to store all of the random draws as they are generated. The slightly more complex implementation below is identical in output behavior, but is more space-efficient by only storing “markers” in the random stream, re-generating the draws themselves as needed when looking farther back into the past.

```python
import random

def monotone_cftp(mc):
    """Return stationary distribution sample using monotone CFTP."""
    updates = [(random.getstate(), 1)]
    while True:
        s0, s1 = mc.min_max()
        rng_next = None
        for rng, steps in updates:
            random.setstate(rng)
            for t in range(steps):
                u = mc.random_update()
                s0.update(u)
                s1.update(u)
            if rng_next is None:
                rng_next = random.getstate()
        if s0 == s1:
            break
        updates.insert(0, (rng_next, 2 ** len(updates)))
    random.setstate(rng_next)
    return s0
```

**References:**

- Propp, J. and Wilson, D., Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics, *Random Structures and Algorithms*, **9**, 1996, p. 223-252 [PDF]
- Wilson, D., Mixing times of lozenge tiling and card shuffling Markov chains, *Annals of Applied Probability*, **14**(1), 2004, p. 274-325 [PDF]

I recently saw several different checks, all from the same bank, each with a different dollar amount printed in a fixed-width font, similar to the shortened example below:

`**PAY EXACTLY**twelve thousand thirty-four and 56/100****************`

Counting asterisks confirmed that the amount fields on each check were all padded to exactly the same length… which made me wonder, what is the largest check that the bank could print, using English words for the dollar amount?

Actually, that’s not quite the question I had in mind. There are very short names for very large numbers, but it’s not very helpful to say that a bank can cut a check for, say, “one googol” dollars, when most *smaller* amounts would not fit in the same length of field.

So to make the problem less linguistic and more mathematical, let’s instead ask the following more precise question: given a field width of $n$ characters, what is the largest integer $m(n)$ such that *every* dollar amount from zero to $m(n)$ can be printed in words using at most $n$ characters?

**Converting integers into words**

For short fields, such as in the case of the bank checks, brute force is sufficient; we just need a function to convert integers into words. This is a pretty common programming problem, with one Python implementation shown below, where `integer_name(n)` works for any $|n| < 10^{36}$:

```python
units = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven',
         'eight', 'nine', 'ten', 'eleven', 'twelve', 'thirteen', 'fourteen',
         'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen']
tens = ['zero', 'ten', 'twenty', 'thirty', 'forty', 'fifty', 'sixty',
        'seventy', 'eighty', 'ninety']
powers = ['zero', 'thousand', 'million', 'billion', 'trillion',
          'quadrillion', 'quintillion', 'sextillion', 'septillion',
          'octillion', 'nonillion', 'decillion']
hundred = 'hundred'
minus = 'minus'
comma = ','
and_ = ' and'
space = ' '
hyphen = '-'
empty = ''

def small_integer_name(n, use_and=False):
    s = empty
    if n >= 100:
        q, n = divmod(n, 100)
        s = units[q] + space + hundred + (
            (and_ if use_and else empty) + space if n > 0 else empty)
    if n >= 20:
        q, n = divmod(n, 10)
        s += tens[q] + (hyphen if n > 0 else empty)
    return (s + units[n] if n > 0 else s)

def integer_name(n, use_comma=False, use_and=False, power=0):
    if n < 0:
        return minus + space + integer_name(-n, use_comma, use_and)
    elif n == 0:
        return units[0]
    s = empty
    if n >= 1000:
        q, n = divmod(n, 1000)
        s = integer_name(q, use_comma, use_and, power + 1) + (
            (comma if use_comma else empty) + space if n > 0 else empty)
    return (s + small_integer_name(n, use_and) +
            (space + powers[power] if power > 0 else empty) if n > 0 else s)
```
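For example, a quick check of the default behavior and the options:

```python
print(integer_name(1113322))
# one million one hundred thirteen thousand three hundred twenty-two
print(integer_name(123, use_and=True))
# one hundred and twenty-three
```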

There are a couple of things to note. First, although typical American style does not include commas separating thousands, or the word *and* between hundreds and tens (e.g., “one hundred and twenty-three”), it’s nice to have the option here, since it affects the calculation of $m(n)$.

Also, “everything” is a variable, including the `hyphen`, `comma`, `space`, even the `empty` string. The addition operator is overloaded for string concatenation, but it’s easy to replace the string constants with, say, integer word counts, or syllable counts, or whatever, without changing the code. For example, we can automatically generate lousy haiku:

One hundred twenty-

one thousand seven hundred

seventy-seven

Coming back to the bank checks, it turns out that $m(n) = 1{,}113{,}322$ for the field width on those checks, so that the bank can print a check for any amount up to $1,113,322.99. Which doesn’t seem terribly large– I wonder if they fall back to using numerals if the amount doesn’t fit using words?
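The brute-force search might look something like the following sketch (my own; note that the cents field always has the same length, so only the dollar amount affects the fit):

```python
def largest_check(width):
    """Largest m such that every amount $0.00 to $m.99 fits the field."""
    n = 0
    while len(integer_name(n) + ' and 99/100') <= width:
        n += 1
    return n - 1

print(largest_check(77))  # 1113322, assuming a 77-character field
```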

**Twitter scale**

This isn’t a new problem. A couple of years ago, FiveThirtyEight.com’s Riddler posed a problem involving the Twitter account @CountVonCount, of Sesame Street fame, essentially asking for $m(139)$… and then again asking for $m(279)$ just last month, in response to the recent increase in maximum length of a tweet from 140 to 280 characters (in each case, less one character to account for an exclamation point).

These field widths are large enough that a brute force approach is no longer feasible. It’s a nice problem to implement a function to compute $m(n)$ efficiently and exactly, with results as shown in the figure below.

All of the source code is available at the usual location here.

In the figure below, a regular hexagon with side length 12 is tiled with “lozenges,” sort of like triangular dominoes, each consisting of a pair of unit-length equilateral triangles joined at a common side.

Each lozenge is in one of three possible orientations. Although the orientations appear random, if we count carefully, we find that there are exactly the same number of lozenges in each of the three orientations. In fact, no matter how we tile the hexagon, the number of lozenges of each “type” is always the same (144 of each, in this case). Can you prove this?

**Proof without words**

Although the (proof of the) Pythagorean theorem seems to be the most common example of a proof without words— usually a picture or diagram that doesn’t require any words to explain– this problem, with its “proof” below, is probably my favorite.

One reason I like this problem, particularly as motivation for student discussion, is that it is somewhat controversial. Is the above “proof without words” really a satisfactory proof?

**Theorem without words**

Moving away from random tilings for the moment, the original motivation for this post wasn’t actually a *proof* without words, but a *theorem* without words. I recently learned an interesting result I had not seen before, involving no more than high school-level physics, that I thought could be presented as an animation without any accompanying explanation:

Unfortunately, I’m not sure this quite succeeds as a “theorem without words.” As with the tiling proof, there is more going on here than the above animation arguably conveys. In particular, one of the most interesting things about this problem– that the animation *doesn’t* really show– is that the *shape* (that is, the eccentricity) of the ellipse of apogees is invariant: it does not depend on the speed of the projectile, or gravity, or any relationship between the two.

**More on random lozenge tilings**

Finally, some source code: although there are plenty of papers and web sites with images of random tilings like those above, I wanted to make my own images, and working code is harder to find. See here for my Python implementation for generating a random lozenge tiling of a hexagon of a given size. For example, the following figure shows a random tiling of a hexagon with side length 64:

My implementation is quick and dirty in the sense that it simply iterates the Markov chain of single-step up-or-down moves in the corresponding family of non-intersecting lattice paths, essentially shuffling long enough for the resulting random tiling to be *approximately* uniform. (Note that the tilings in the above images are more random than they might appear, despite the “frozen” regions near the corners.) It would be interesting to extend this implementation to use coupling from the past, to generate an *exactly* uniform random tiling.


This post was motivated by a recent attempt to transform a photograph into a large digital print, in what I hoped would be a mathematically interesting way. The idea was pretty simple: convert the (originally color) image into a black-on-white line drawing– a white background, with a single, continuous, convoluted black curve, one pixel wide.

This isn’t a new idea. One example of how to do this is to convert the image into a solution of an instance of the traveling salesman problem, with more “cities” clustered in darker regions of the source image. But I wanted to do something slightly different, with more explicitly visible structure… which doesn’t necessarily translate to more visual appeal: draw a Hilbert space-filling curve, but vary the *order* of the curve (roughly, the depth of recursion) locally according to the gray level of the corresponding pixels of the source image.

After some experimenting, I settled on the transformation described in the figure below. Each pixel of the source image is “inflated” to an 8-by-8 block of pixels in the output, with a black pixel (lower left) represented by a second-order Hilbert curve, and a white pixel (upper left) by just a line segment directly connecting the endpoints of the block, with two additional gray levels in between, each connecting progressively more/fewer points along the curve.

**Example**

The figure below shows an example of creating an image for input to the algorithm. There are two challenges to consider:

- **Size:** The 8-fold inflation of each pixel means that if we want the final output to be 1024-by-1024, then the input image must be 128-by-128, as shown here. (For my project, I could afford a 512-by-512 input, with larger than single-pixel steps between points on the curve, yielding output suitable for a 36-inch print.)
- **Grayscale:** The color in the input image must be quantized to just four gray levels, using your favorite photo editor. (I did this in Mathematica.)

The figure below shows the resulting output. You may have to zoom in to see the details of the curve, especially in the black regions.

**Source code**

Following is the Python source that does the transformation. It uses the Hilbert curve encoding module from my allRGB experiment, and I used Pygame for the image formatting so that I could watch the output as it was created.

```python
import hilbert
import pygame

BLACK = (0, 0, 0, 255)
DARK_GRAY = (85, 85, 85, 255)
LIGHT_GRAY = (160, 160, 160, 255)
WHITE = (255, 255, 255, 255)
STEPS = {BLACK: [1] * 15,
         DARK_GRAY: [1, 1, 1, 1, 4, 1, 1, 5],
         LIGHT_GRAY: [4, 7, 4],
         WHITE: [15]}

class Halftone:
    def __init__(self, image, step):
        self.h = hilbert.Hilbert(2)
        self.index = -1
        self.pos = (0, 0)
        width, height = image.get_size()
        self.target = pygame.Surface((step * 4 * width, step * 4 * height))
        self.target.fill(WHITE)
        for pixel in range(width * height):
            self.move(1, step)
            for n in STEPS[tuple(image.get_at([w // 4 for w in self.pos]))]:
                self.move(n, step)

    def move(self, n, step):
        self.index = self.index + n
        next_pos = self.h.encode(self.index)
        pygame.draw.line(self.target, BLACK,
                         [step * w for w in self.pos],
                         [step * w for w in next_pos])
        self.pos = next_pos

if __name__ == '__main__':
    import sys
    step = int(sys.argv[1])
    for filename in sys.argv[2:]:
        pygame.image.save(Halftone(pygame.image.load(filename), step).target,
                          filename + '.ht.png')
```

Earlier this year, I spent some time calculating the probability of a Scrabble “bingo:” drawing a rack of 7 tiles and playing all of them in a single turn to spell a 7-letter word. The interesting part of the analysis was the use of the Google Books Ngrams data set to limit the dictionary of playable words by their frequency of occurrence, to account for novice players like me that might only know “easier” words in more common use. The result was that the probability of drawing a rack of 7 letters that can play a 7-letter word is about 0.132601… if we allow the *entire* official Scrabble dictionary, including words like *zygoses* (?). The probability is only about 0.07 if we assume that I only know about a third– the most frequently occurring third– of the 7-letter words in the dictionary.

But given just how poorly I play Scrabble, it seems optimistic to focus on a bingo. Instead of asking whether we can play a 7-letter word using the entire rack, let’s consider what turns out to be a much harder problem, namely, whether the rack is *playable at all*: that is,

**Problem:** What is the probability that a randomly drawn rack is *playable*, i.e., contains some subset of tiles, not necessarily all 7, that spell a word in the dictionary?

**Minimal subracks**

The first interesting aspect of this problem is the size of the dictionary. There are 187,632 words in the 2014 *Official Tournament and Club Word List* (the latest for which an electronic version is accessible)… but we can immediately discard nearly 70% of them, those with more than 7 letters, leaving 56,624 words with at most 7 letters. (There is exactly one such word, *pizzazz*, that I’ll leave in our list, despite the fact that it is one of 13 words in the dictionary with no hope of ever being played: it has too many *z*‘s!)

But we can trim the list further, first by noting that the order of letters in a word doesn’t matter– this helps a little but not much– and second by noting that some words may contain other shorter words as subsets. For example, the single most frequently occurring word, *the*, contains the shorter word *he*. So when evaluating a rack to see whether it is playable or not, we only need to check if we can play *he*; if we can’t, then we can’t play *the*, either.

In other words, we only need to consider *minimal subracks*: the minimal elements of the inclusion partial order on sets of tiles spelling words in the dictionary. This is a huge savings: instead of 56,624 words with at most 7 letters, we only need to consider the following 223 minimal “words,” or unordered subsets of tiles, with blank tiles indicated by an underscore:

__, _a, _b, _cv, _d, _e, _f, _g, _h, _i, _j, _k, _l, _m, _n, _o, _p, _q, _r, _s, _t, _u, _vv, _w, _x, _y, _z, aa, ab, aco, acv, ad, ae, af, ag, ah, ai, ak, al, am, an, aov, ap, aqu, ar, as, at, auv, avv, aw, ax, ay, az, bbu, bcu, bdu, be, bfu, bgu, bi, bklu, bllu, bo, brr, bru, by, cciiv, ccko, cdu, cee, cei, cekk, ceu, cffu, cfku, cgku, cgllyy, chly, chrtw, ciirr, ciy, cklu, cllu, cmw, coo, coz, cru, cry, cuz, ddu, de, dfu, dgu, di, djuy, dkuu, dlu, do, dru, dry, duw, eeg, eej, eek, eequu, eev, eez, ef, egg, egk, egv, eh, eiv, eju, eku, ekz, el, em, en, eo, ep, er, es, et, ew, ex, ey, ffpt, fgu, fi, flu, fly, fo, fru, fry, ghlly, gi, gju, glu, go, gpy, grr, gru, gsyyyz, guv, guy, hi, hm, hnt, ho, hpt, hpy, hs, hty, hu, hwy, iirvz, ijzz, ik, il, im, in, io, ip, iq, irry, is, it, ivy, iwz, ix, jjuu, jkuu, jo, kkoo, klru, kouz, kruu, kst, ksy, kuy, lllu, llxyy, lnxy, lo, lpy, lsy, luu, luv, mm, mo, mu, my, no, nsy, nu, nwy, ooz, op, or, os, ot, ow, ox, oy, ppty, pry, pst, psy, ptyy, pu, pxy, rty, ruy, rwy, sty, su, tu, uuyz, uwz, ux, uzz, zzz

The figure below expands on this calculation, showing the number of minimal subracks on the *y*-axis if we limit the dictionary of words we want to be able to play in various ways: either by frequency on the *x*-axis, where dumber is to the left; or by minimum word length, where the bottom curve labeled “>=2” is the entire dictionary, and the top curve labeled “>=7” requires a bingo.

As this figure shows, we benefit greatly– computationally, that is– from being able to play short words. At the other extreme, if we require a bingo, then effectively *every word is a minimal subrack*, and we actually suffer further inflation by including the various ways to spell each word with blanks.

**Finding a subrack in a trie**

At this point, the goal is to compute the probability that a randomly drawn rack of 7 tiles contains at least one of the minimal subracks described above. It turns out that there are only 3,199,724 distinct (but not equally likely) possible racks, so it *should* be feasible to simply enumerate them, testing each rack one by one, accumulating the frequency of those that are playable… as long as each test for containing a minimal subrack is reasonably fast. (Even if we just wanted to *estimate* the probability via Monte Carlo simulation by sampling randomly drawn racks, we still need the same fast test for playability.)

This problem is ready-made for a trie data structure, an ordered tree where each edge is labeled with a letter (or blank), and each leaf vertex represents a minimal playable subrack containing the *sorted* multiset of letters along the path to the leaf. (We don’t have to “mark” any non-leaf vertices; only leaves represent playable words, since we reduced the dictionary to the antichain of *minimal* subracks.)

The following Python code represents a trie of multisets as a nested dictionary with single-element keys (letters in this case), implementing the two operations needed for our purpose:

- Inserting a “word” multiset given as a sorted list of elements (or a sorted string of characters in this case),
- Testing whether a given word (i.e., a drawn rack of 7 tiles, also sorted) contains a word in the trie as a subset.

Insertion is straightforward; the recursive subset test is more interesting:

```python
def trie_insert(t, word):
    for letter in word:
        if not letter in t:
            t[letter] = {}
        t = t[letter]

def trie_subset(t, word):
    if not t:
        return True
    if not word:
        return False
    return (word[0] in t and trie_subset(t[word[0]], word[1:])
            or trie_subset(t, word[1:]))
```
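For example, a quick illustration (my own) with a tiny subset of the minimal subracks; note that both inserted words and tested racks must be sorted, with the underscore for blanks sorting before the letters:

```python
trie = {}
for w in ['_a', 'aa', 'ab', 'be', 'cry']:  # a few of the 223 minimal subracks
    trie_insert(trie, w)

print(trie_subset(trie, sorted('bdfkqvx')))  # False: unplayable rack
print(trie_subset(trie, sorted('bcdeqvx')))  # True: contains 'be'
```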

**Enumerating racks**

Armed with a test for whether a given rack is playable, it remains to enumerate all possible racks, testing each one in turn. In more general terms, we have a multiset of 100 tiles of 27 types, and we want to enumerate all possible 7-subsets. The following Python code does this, representing a multiset (or subset) as a list of *counts* of each element type. This is of interest mainly because it’s my first occasion to use the `yield from` syntax:

```python
def multisubsets(multiset, r, prefix=[]):
    if r == 0:
        yield prefix + [0] * len(multiset)
    elif len(multiset) > 0:
        for k in range(min(multiset[0], r) + 1):
            yield from multisubsets(multiset[1:], r - k, prefix + [k])
```

(This and the rest of the Scrabble-specific code are available at the usual location here.)
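Putting the pieces together, the final accumulation might look like the following sketch (my own reconstruction; `bag` is the list of tile counts and `alphabet` the corresponding tile symbols, sorted with the blank first):

```python
from math import comb, prod

def playable_probability(bag, alphabet, trie, r=7):
    hits = 0
    for rack in multisubsets(bag, r):
        if trie_subset(trie, ''.join(a * k for a, k in zip(alphabet, rack))):
            # number of ways to draw this particular rack from the bag
            hits += prod(comb(c, k) for c, k in zip(bag, rack))
    return hits / comb(sum(bag), r)
```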

The final results are shown in the figure below, with convention for each curve similar to the previous figure.

For example, the bottom curve essentially reproduces the earlier bingo analysis of the probability of playing a word of at least (i.e., exactly) 7 letters from a randomly drawn rack. At the other extreme, if we have the entire dictionary at our disposal (with all those short words of >=2 letters), it’s really hard to draw an unplayable rack (probability 0.00788349). Even if we limit ourselves to just the 100 or so most frequently occurring words– many of which are still very short– we can still play 9 out of 10 racks.

In between, suppose that even though we might *know* many short words, ego prevents actually playing anything shorter than, say, 5 letters in the first turn. Now the probabilities start to depend more critically on whether we are an expert who knows most of the words in the official dictionary, or a novice who might only know a third or fewer of them.

Suppose that Alice and Bob play a game for a dollar: they roll a single six-sided die repeatedly, until either:

- Alice wins if they observe each of the six possible faces at least once, or
- Bob wins if they observe any one face six times.

Would you rather be Alice or Bob in this scenario? Or does it matter? You can play a similar game with a deck of playing cards: shuffle the deck, and deal one card at a time from the top of the deck, with Alice winning when all four suits are dealt, and Bob winning when any particular suit is dealt four times. (This version has the advantage of providing a built-in record of the history of deals, in case of argument over who actually wins.) Again, would you rather be Alice or Bob?

It turns out that Alice has a distinct advantage in both games, winning nearly three times more often than Bob in the dice version, and nearly twice as often in the card version. The objective of this post is to describe some interesting mathematics involved in these games, and relate them to the game of Bingo, where a similar phenomenon is described in a recent *Math Horizons* article (see reference below): the winning Bingo card among multiple players is much more likely to have a *horizontal* bingo (all numbers in some row) than vertical (all numbers in some column).

**Bingo with a single card**

First, let’s describe how Bingo works with just one player. A Bingo card is a 5-by-5 grid of numbers, with each column containing 5 numbers randomly selected without replacement from 15 possibilities: the first “B” column is randomly selected from the numbers 1-15, the second “I” column is selected from 16-30, the third column from 31-45, the fourth column from 46-60, and the fifth “O” column from 61-75. An example is shown below.

A “caller” randomly draws, without replacement, from a pool of balls numbered 1 through 75, calling each number in turn as it is drawn, with the player marking the called number if it is present on his or her card. The player wins by marking all 5 squares in any row, column, or diagonal. (One minor wrinkle in this setup is that, in typical American-style Bingo, the center square is “free,” considered to be already marked before any numbers are called.)

It will be useful to generalize this setup with parameters $(n, m)$, where each card is $n \times n$, with each column selected from $m$ possible values, so that standard Bingo corresponds to $(n, m) = (5, 15)$.

We can compute the probability distribution of the number of draws required for a single card to win. Bill Butler describes one approach, enumerating the possible partially-marked cards and computing the probability of at least one bingo for each such marking.

Alternatively, we can use inclusion-exclusion to compute the cumulative distribution directly, by enumerating just the possible combinations of horizontal, vertical, and diagonal bingos (of which there are 5, 5, and 2, respectively) on a card after $k$ of the 75 numbers have been drawn. In Mathematica:

```mathematica
bingoSets[n_, free_: True, diag_: True] :=
  Module[{card = Partition[Range[n^2], n]},
   If[free, card[[Ceiling[n/2], Ceiling[n/2]]] = 0];
   DeleteCases[
    Join[card, Transpose[card],
     If[diag, {Diagonal[card], Diagonal[Reverse[card]]}, {}]],
    0, Infinity]]

bingoCDF[k_, nm_, bingos_] :=
  Module[{j},
   1 - Total@Map[
       (j = Length[Union @@ #];
         (-1)^Length[#] Binomial[nm - j, k - j]) &,
       Subsets[bingos]]/Binomial[nm, k]]

bingos = bingoSets[5, False, True];
cdf = Table[bingoCDF[k, 75, bingos], {k, 1, 75}];
```

(Note the optional arguments specifying whether the center square is free, and whether diagonal bingo is allowed. It will be convenient shortly to consider a simplified version of the game, where these two “special” rules are discarded.)

The following figure shows the resulting distribution, with the expected number of 43.546 draws shown in red.

**Independence with 2 or more cards**

Before getting to the “horizontal is more likely than vertical” phenomenon, it’s worth pointing out another non-intuitive aspect of Bingo. If instead of just a single card, we have a game with multiple players, possibly with *thousands* of different cards, what is the distribution of number of draws until someone wins?

If $F(k)$ is the cumulative distribution for a *single* card as computed above, then since each of multiple cards is randomly– and *independently–* “generated,” intuitively it seems like the probability of at least one winning bingo among $c$ cards in at most $k$ draws should be given by

$$1 - (1 - F(k))^c$$

Butler uses exactly this approach. However, this is incorrect; although the *values* in the squares of multiple cards are independent, the presence or absence of winning bingos are not. Perhaps the best way to see this is to consider a “smaller” simplified version of the game, with $(n, m) = (2, 2)$, so that there are only four equally likely possible distinct cards:

Let’s further simplify the game so that only horizontal and vertical bingos are allowed, with no diagonals. Then the game must end after either two or three draws: with a single card, it ends in two draws with probability 2/3. However, with two cards, the probability of a bingo in two draws is 5/6, not $1 - (1 - 2/3)^2 = 8/9$.
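This is easy to verify by enumeration (a quick check of my own, not from the post):

```python
from fractions import Fraction
from itertools import permutations, product

# the four possible 2x2 cards: column 1 from {1,2}, column 2 from {3,4}
cards = [((a, b), (3 - a, 7 - b)) for a in (1, 2) for b in (3, 4)]

def bingo(card, drawn):
    lines = [set(card[0]), set(card[1]),
             {card[0][0], card[1][0]}, {card[0][1], card[1][1]}]
    return any(line <= set(drawn) for line in lines)

draws = list(permutations([1, 2, 3, 4], 2))
one = Fraction(sum(bingo(c, d) for c in cards for d in draws),
               len(cards) * len(draws))
two = Fraction(sum(bingo(c1, d) or bingo(c2, d)
                   for c1, c2 in product(cards, repeat=2) for d in draws),
               len(cards) ** 2 * len(draws))
print(one, two)  # 2/3 and 5/6
```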

**Horizontal bingos with many cards**

Finally, let’s come back to the initial problem: suppose that there are a large number of players, with so many cards in play that we are effectively guaranteed a winner as soon as either:

- At least one number from each of the $n$ column groups is drawn, resulting in a horizontal bingo on some card; or
- At least $n$ of the $m$ possible numbers are drawn from any one particular column group, resulting in a vertical bingo on some card.

(Let’s ignore the free square and diagonal bingos for now; the former is easily handled but unnecessarily complicates the analysis, while the latter would mean that (1) and (2) are not mutually exclusive.)

Then the interesting observation is that a horizontal bingo (1) is over three times more likely to occur than a vertical bingo (2). Furthermore, this setup– Bingo with a large number of cards– is effectively the same as the card and dice games described in the introduction: Bingo is $(n, m) = (5, 15)$, the card game is $(4, 13)$, and the dice version is effectively $(6, \infty)$.

The *Math Horizons* article referenced below describes an approach to calculating these probabilities, which involves enumerating integer partitions. However, this problem is ready-made for generating functions, which take care of the partition house-keeping for us: let’s define

$$g_j(x) = \sum_{k=j}^{n-1} \binom{m}{k} x^k$$

so that, for example, for Bingo with no free square,

$$g_0(x)^5 - g_1(x)^5 = \left( \sum_{k=0}^{4} \binom{15}{k} x^k \right)^5 - \left( \sum_{k=1}^{4} \binom{15}{k} x^k \right)^5$$

Intuitively, each factor corresponds to a column, where each coefficient of $x^k$ indicates the number of ways to draw exactly $k$ numbers from that column (with the minimum number drawn from each column specified by the subscript $j$). The overall coefficient of $x^k$ in $g_0(x)^n - g_1(x)^n$ indicates the number of ways to draw $k$ numbers in total, *with neither a horizontal nor vertical bingo*.

Then the probability of a horizontal bingo on *exactly* the $k$-th draw is

$$P_H(k) = n \, m \, \frac{(k-1)! \, (nm-k)!}{(nm)!} \, [x^{k-1}] \, g_1(x)^{n-1}$$

and the probability of a vertical bingo on exactly the $k$-th draw is

$$P_V(k) = n \, (m-n+1) \, \frac{(k-1)! \, (nm-k)!}{(nm)!} \, [x^{k-1}] \left( \binom{m}{n-1} x^{n-1} \left( g_0(x)^{n-1} - g_1(x)^{n-1} \right) \right)$$

The summation

$$\sum_{k} P_H(k)$$

over all possible numbers of draws yields the overall probability of about 0.752 that a horizontal bingo is observed before a vertical one. Similarly, for the card game with $(n, m) = (4, 13)$, the probability that Alice wins is 22543417/34165005, or about 0.66. For the dice game– which requires a slight modification to the above formulation, left as an exercise for the reader– Alice wins with probability about 0.747.
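A short sketch (my own implementation of the horizontal-bingo summation above, not code from the post or the article) to back up these numbers:

```python
from fractions import Fraction
from math import comb, factorial

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def horizontal_first(n, m):
    """Probability that a horizontal bingo occurs before a vertical one."""
    g1 = [0] + [comb(m, k) for k in range(1, n)]  # one factor per column
    g1_pow = [1]
    for _ in range(n - 1):
        g1_pow = poly_mul(g1_pow, g1)
    total = Fraction(0)
    for k in range(1, len(g1_pow) + 1):
        # the k-th draw covers the last empty column group (m choices); the
        # other n-1 groups each have 1..n-1 draws among the first k-1
        ways = n * m * factorial(k - 1) * g1_pow[k - 1]
        total += Fraction(ways * factorial(n * m - k), factorial(n * m))
    return total

print(horizontal_first(4, 13))         # expect 22543417/34165005
print(float(horizontal_first(5, 15)))  # expect about 0.752
```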

**Reference:**

- Benjamin, A., Kisenwether, J., and Weiss, B., The Bingo Paradox, *Math Horizons*, **25**(1), September 2017, p. 18-21 [PDF]