On average, we die a decade earlier than expected

“Doctors say he’s got a 50/50 chance of living… though there’s only a 10% chance of that.”

I’ve lately had occasion to contemplate my own mortality. How long should I expect to live? The most recent life table published by the Centers for Disease Control (see the reference at the end of this post) indicates an expected lifespan of 76.5 years for a male. This is based on a model of age at death as a random variable X with the probability density shown in the following figure.

Probability distributions of age at death based on the United States period life table for 2014.

The expected lifespan of 76.5 years is E[X] (using the red curve for males). In other words, if we observed a large number of hypothetical (male) infants born in the reference period 2014, and if they continued to experience 2014 mortality rates throughout their lifetimes, then their ages at death would follow the above distribution, with an average of 76.5 years.

However, I have more information now: I have already survived roughly four decades of life. So it makes sense to ask, what is my conditional expected age at death, given that I have already survived to, say, age 40? In other words, what is E[X | X \geq 40]?

This value is 78.8 years; I can expect to live to a greater age now than I thought I would when I was first born. The following figure shows this conditional expected age at death E[X | X \geq x], as well as the corresponding expected additional lifespan E[X-x | X \geq x], as a function of current age x.

Conditional expected age at death and expected additional lifespan, vs. current age.

For another example, suppose that I survive to age 70. Instead of expecting just another 6.5 years, my expected additional lifespan has jumped to 14.5 years.

Which brings us to the interesting observation motivating this post: suppose instead that I die at age 70. I will have missed out on an additional 14.5 years of life on average, compared to the rest of the septuagenarians around me. Put another way, at the moment of my death, I perceive that I am dying 14.5 years earlier than expected.

But this perceived “loss” always occurs, no matter when we die! (In terms of the above figure, the expected value E[X-x | X \geq x] is always positive.) We can average this effect over the entire population, and find that on average males die 12.2 years earlier than expected, and females die 10.8 years earlier than expected.
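
For the curious, the following is a minimal sketch of these conditional-expectation calculations (the names expected_additional_life and average_early_death are mine). It assumes the probability table at the end of this post has been read into a list p with p[x] = P(\lfloor X \rfloor = x) for a single column, and that deaths within a year of age occur at mid-year on average:

def expected_additional_life(p):
    """E[X - x | X >= x] for each age x, given p[x] = P(floor(X) = x).

    Assumes deaths within a year of age occur at mid-year on average.
    """
    e = []
    for x in range(len(p)):
        tail = sum(p[x:])
        e.append(sum((y + 0.5) * p[y] for y in range(x, len(p))) / tail - x)
    return e

def average_early_death(p):
    """Average perceived "loss" at the moment of death, over the population."""
    e = expected_additional_life(p)
    return sum(p[x] * e[x] for x in range(len(p)))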

Reference:

  1. Arias, E., United States Life Tables 2014, National Vital Statistics Reports, 66(4) August 2017 [PDF]

Following are the probabilities P(\left\lfloor{X}\right\rfloor = x) for the United States 2014 period life table used in this post, derived from the NVSR data in the above reference, extended to maximum age 120 using the methodology described in the technical notes.

Age   P(all)              P(male)             P(female)
===========================================================
  0 0.005831            0.006325            0.005313
  1 0.000367843         0.000391508         0.000343167
  2 0.000246463         0.000276133         0.000216767
  3 0.000182814         0.000206546         0.000157072
  4 0.000156953         0.000183668         0.000129216
  5 0.000141037         0.000160804         0.000120255
  6 0.000125127         0.000142914         0.000106328
  7 0.000112203         0.000128008         0.0000963806
  8 0.000100276         0.000112117         0.0000884231
  9 0.0000913317        0.0000992073        0.0000834481
 10 0.0000883454        0.0000932456        0.0000824477
 11 0.0000952854        0.000103156         0.0000874072
 12 0.000119095         0.000137857         0.000101304
 13 0.000164729         0.000203286         0.000124134
 14 0.000227209         0.000294457         0.000155893
 15 0.000293617         0.000391501         0.000190617
 16 0.000362946         0.000491412         0.000227306
 17 0.000442117         0.000609009         0.000265957
 18 0.000529115         0.000743227         0.000302594
 19 0.000616971         0.000881116         0.000338207
 20 0.00070566          0.00101964          0.000373784
 21 0.000786255         0.00114394          0.000408332
 22 0.000847887         0.0012343           0.000438875
 23 0.000886654         0.00128494          0.000465417
 24 0.000909534         0.00130883          0.000490933
 25 0.000929392         0.0013228           0.0005174
 26 0.000952147         0.00134161          0.000545804
 27 0.000977786         0.00136426          0.00057515
 28 0.0010063           0.00139366          0.000605432
 29 0.00103864          0.00142878          0.00063566
 30 0.00107285          0.0014657           0.000668788
 31 0.00110792          0.00150147          0.000705793
 32 0.00114483          0.0015361           0.000745674
 33 0.00118454          0.00156959          0.000792356
 34 0.00122898          0.00160582          0.000845811
 35 0.00128495          0.00165443          0.000908956
 36 0.00135141          0.00171632          0.000981746
 37 0.00142538          0.00178654          0.00106021
 38 0.00150387          0.0018631           0.00114136
 39 0.00158781          0.00194786          0.00122516
 40 0.00168489          0.00204935          0.00131843
 41 0.00179886          0.00217314          0.00142304
 42 0.00192761          0.00231898          0.00153401
 43 0.00207581          0.00249327          0.00165615
 44 0.00224899          0.00270038          0.00179419
 45 0.00243725          0.00292846          0.00194116
 46 0.00265083          0.0031884           0.00210855
 47 0.0029103           0.00350311          0.00231349
 48 0.00321588          0.00387243          0.0025555
 49 0.0035468           0.00427362          0.00281771
 50 0.00387592          0.00467212          0.00307957
 51 0.00420287          0.00507013          0.00333606
 52 0.00454693          0.00549737          0.00359937
 53 0.00492128          0.00597165          0.0038758
 54 0.00532664          0.00649003          0.00417147
 55 0.00575619          0.00703581          0.00448667
 56 0.00619215          0.00758275          0.00481225
 57 0.00662626          0.00813029          0.00513737
 58 0.00705499          0.00866988          0.00545972
 59 0.00748745          0.00921066          0.00578812
 60 0.00794918          0.00978879          0.0061402
 61 0.0084469           0.010399            0.00653116
 62 0.0089597           0.010994            0.00696556
 63 0.00947691          0.0115436           0.00744933
 64 0.0100035           0.0120603           0.00797895
 65 0.0105466           0.0125691           0.00854801
 66 0.0111425           0.0131347           0.00916566
 67 0.0118165           0.0137895           0.00985298
 68 0.0126025           0.0145881           0.010625
 69 0.0135386           0.0155721           0.0115183
 70 0.014622            0.016711            0.0125556
 71 0.0157853           0.0179169           0.0136863
 72 0.0169733           0.0191484           0.0148415
 73 0.0181664           0.0203416           0.0160437
 74 0.0193544           0.0214907           0.0172763
 75 0.0205581           0.0226235           0.0185559
 76 0.0219039           0.023887            0.0199909
 77 0.0233782           0.0252875           0.0215559
 78 0.0249405           0.0266573           0.0233399
 79 0.0266659           0.0281283           0.0253501
 80 0.0283006           0.0295587           0.0272207
 81 0.0298041           0.0307938           0.0290306
 82 0.0311707           0.0318902           0.0307088
 83 0.0326118           0.0329808           0.0325375
 84 0.0338734           0.0336728           0.0344093
 85 0.0348103           0.0342521           0.0357896
 86 0.0356915           0.0345144           0.0373244
 87 0.036144            0.0342741           0.0384714
 88 0.0361034           0.0334914           0.0391388
 89 0.0355212           0.0321521           0.0392438
 90 0.0343716           0.0302738           0.0387212
 91 0.0326583           0.0279093           0.0375332
 92 0.0304192           0.0251463           0.0356786
 93 0.0277276           0.0221028           0.0331989
 94 0.02469             0.0189177           0.030181
 95 0.0214386           0.0157381           0.0267542
 96 0.0181203           0.0127037           0.0230805
 97 0.0148823           0.00993297          0.0193396
 98 0.011857            0.00751144          0.0157101
 99 0.00914934          0.00548606          0.0123497
100 0.00682791          0.0038652           0.00937893
101 0.0049216           0.00262443          0.0068711
102 0.00342273          0.00171608          0.00484973
103 0.00229461          0.00108015          0.00329442
104 0.00148199          0.000654343         0.00215221
105 0.000921789         0.000381551         0.00135156
106 0.000552114         0.000214241         0.000815772
107 0.00031851          0.000115918         0.00047333
108 0.000177059         0.0000604924        0.000264139
109 0.0000949139        0.0000304833        0.000141878
110 0.0000491106        0.0000148533        0.0000734294
111 0.0000245565        0.00000700891       0.0000366663
112 0.0000118822        0.00000320819       0.0000176914
113 0.00000557214       0.00000142697       0.00000826206
114 0.00000253659       0.000000617876      0.00000374136
115 0.00000112287       0.000000260924      0.00000164594
116 0.0000004842        0.00000010766       0.00000070483
117 0.000000203758      0.0000000434809     0.000000294373
118 0.0000000838243     0.0000000172192     0.000000120143
119 0.0000000337717     0.0000000066978     0.0000000480077
120 0.0000000216956     0.00000000409582    0.0000000304297

How many melodies are there?

Introduction

The title question came up recently, which I think makes for an interesting combinatorics exercise. The idea is that, given the extent of “borrowing” of past musical ideas by later artists, are we in danger of running out of new music?

We can turn this into a combinatorics problem by focusing solely on pitch and rhythm: in a single bar of music, how many possible melodies are there, consisting of a sequence of notes and rests of varying pitch and duration? The point is that this number is finite; it may be astronomical, but how astronomical?

This is not a new question, and there are plenty of answers out there that make various simplifying assumptions. For example, Vsauce has a video, “Will We Ever Run Out of New Music?,” which in turn refers to a write-up, “How many melodies are there in the universe?,” describing a calculation based on a recurrence relation that effectively requires cutting segments of a bar exactly in half– undercounting in a way that I suspect was not intentional, implicitly prohibiting even relatively simple melodies like “I’ll Be Home for Christmas.”

But the most common simplifying assumption seems to be a lack of treatment of rests— that is, only counting melodies consisting of a sequence of notes. Rests are an interesting wrinkle that complicates the counting problem: for example, a half note is different from two consecutive quarter notes of the same pitch, but a half rest sounds the same as two consecutive quarter rests. The objective of this post is to add this “expressive power” to the calculation of possible melodies.

Solution

Consider a single bar in 4/4 time, consisting of a sequence of whole, half, quarter, eighth, and sixteenth notes and/or rests, with notes chosen from 13 possible pitches, allowing melodies within an octave of the 12-pitch chromatic scale, but also allowing an octave jump (e.g., “Take Me Out to the Ball Game,” “Over the Rainbow,” etc.).

Twelve notes of the chromatic scale, and a thirteenth allowing melodies with an octave jump.

We can encode the choice of a single note of n possible pitches with the following generating function, weighted by duration:

g_n(x) = n(x+x^2+x^4+x^8+x^{16})

and all possible rests with

h(x) = \frac{x}{1-x}

Then the generating function for the number of possible melodies is

f_n(x) = \frac{1+h(x)}{1-g_n(x)(1+h(x))}

Intuitively, a melody consists of zero or one rest, followed by a sequence of zero or more sub-sequences, each consisting of a note followed by zero or one rest. The coefficient [x^{16}]f_{13}(x) is the number of one-bar melodies given the above constraints… but this counts two melodies as distinct even if one is just a transposition of the other (that is, even if they have the same sequence of relative pitches). The number of possible melodies consisting of sequences of intervals confined to at most an octave jump is

[x^{16}]f_{13}(x) - [x^{16}]f_{12}(x) + 1

where the +1 accounts for the single “silent melody” of a whole rest. The result is 3,674,912,999,046,911,152, or about 3.7 billion billion possible melodies.
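
For reference, here is a short sketch of this coefficient extraction using SymPy series expansion (the helper melody_count is mine; it assumes a single 4/4 bar of 16 sixteenth-note units):

import sympy as sp

x = sp.symbols('x')

def melody_count(n, bar=16):
    """[x^bar] f_n(x): number of one-bar melodies using n possible pitches."""
    g = n * (x + x**2 + x**4 + x**8 + x**16)   # one note of n possible pitches
    h = x / (1 - x)                            # one rest of any total duration
    f = sp.cancel((1 + h) / (1 - g * (1 + h)))
    return sp.series(f, x, 0, bar + 1).removeO().coeff(x, bar)

# count up to transposition, plus the single "silent" whole-rest melody
print(melody_count(13) - melody_count(12) + 1)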

Results

The machinery described above may be easily extended to consider different sets of assumptions: different time signatures, longer or shorter lists of possible notes to choose from, dotted notes and rests, triplets (e.g., the Star Wars theme), etc. The figure below shows the number of possible one-bar melodies for a variety of such assumptions.

As might be expected, dotted notes and/or rests do not affect the “space” of possible melodies nearly as much as note duration: halve the shortest allowable note value, and you very roughly double the number of “bits” in the representation of a melody. If we extend our expressive power to allow 32nd (possibly dotted) notes and rests, then there are 6,150,996,564,625,709,162,647,180,518,925,064,281,006 possible melodies.

Of course, these calculations only address the question of how many melodies are possible— not how many of such melodies are actually appealing to our human ears.


Picking a perfect NCAA bracket: 2018 was the most unlikely tournament so far

Introduction

Every year, there are upsets and wild outcomes during the NCAA men’s basketball tournament. But this year felt, well, wilder. For example, for the first time in 136 games over the 34 years of the tournament’s current 64-team format, a #16 seed (UMBC) beat a #1 seed (Virginia) in the first round. (I refuse to acknowledge the abomination of the four “play-in” games in the zeroth round.) And I am a Kansas State fan who watched my #9 seed Wildcats beat Kentucky, a team that went to the Final Four in 4 of the last 8 years.

So I wondered whether this was indeed the “wildest” tournament ever… and it turns out that it was, by several reasonable metrics.

Modeling game probabilities

To compare the tournaments in different years, we assume that the probability of the outcome of any particular game depends only on the seeds of the opposing teams. Schwertman et al. (see the reference below) suggest a reasonable model of the form

p_{i,j} = 1-p_{j,i} = \frac{1}{2}+k(s_i-s_j)

where s_i is some measure of the “strength” of seed i (ranging from 1 to 16), and the scale factor k calibrates the range of resulting probabilities, selected here so that the most extreme value p_{16,1}=1/136 matches the current maximum likelihood estimate based on the 136 observations over the past 34 years.

One simple strength function is the linear s_i=-i, although this would suggest, for example, that #1 vs. #5 and #11 vs. #15 are essentially identical match-ups. A better fit is

s_i = \Phi^{-1}(1-\frac{4i}{n})

where \Phi^{-1} is the quantile function (inverse CDF) of the standard normal distribution, and n=351 is the number of teams in all of Division I. The idea is that team strength is normally distributed, and the tournament invites the 64 teams in the upper tail of the distribution, as shown in the figure below.

Normally-distributed strength s_i for each seed.
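
A minimal sketch of this match-up model in Python, using SciPy’s norm.ppf for \Phi^{-1} (the names strength and p_win are mine):

from scipy.stats import norm

n_teams = 351                        # teams in all of Division I

def strength(i):
    """Normal strength s_i = Phi^{-1}(1 - 4i/n) of seed i."""
    return norm.ppf(1 - 4 * i / n_teams)

# calibrate k so that the most extreme match-up has p(16 beats 1) = 1/136
k = (1 / 136 - 1 / 2) / (strength(16) - strength(1))

def p_win(i, j):
    """Probability that seed i beats seed j."""
    return 1 / 2 + k * (strength(i) - strength(j))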

Probability of a perfect bracket

Armed with these candidate models, I looked at all of the tournaments since 1985, the first year of the current 64-team format. I have provided summary data sets before (a search of this blog for “NCAA” will yield several posts on this subject), but this analysis required more raw data, all of which is now available at the usual location here.

For each year of the tournament, we can ask what is the probability of picking a perfect bracket in that year, correctly identifying the winners of all 63 games? Actually, there are three reasonable variants of this question:

  1. If we flip a coin to pick each game, what is the probability of picking every game correctly?
  2. If we pick a “chalk” bracket, always picking the favored higher-seeded (i.e., lower-numbered) team to win each game, what is the probability of picking every game correctly?
  3. If we managed to pick the perfect bracket for a given year, what is the prior probability of that particular outcome?

The answer to the first question is 1 in 2^{63}, or the “1 in 9.2 quintillion” that appears in the popular press. And this is always exactly correct, no matter how individual teams actually match up in any given year, as long as we are flipping a coin to guess the outcome of each game. But this isn’t very realistic, since seed match-ups do matter; a #1 seed will beat a #16 seed… well, almost all of the time.

So the second question is more interesting, but also more complicated, since it does depend on our model of how different seeds match up… but it doesn’t depend on which year of the tournament we’re talking about, at least as long as we always use the same model. Using the strength models described above, a chalk bracket has a probability of around 1 in 100 billion of being correct (1 in 186 billion for the linear strength model, or 1 in 90 billion for the normal strength model).
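
To illustrate where that figure comes from, here is a sketch of the chalk-bracket calculation built on the p_win model sketched above (the name region_games is mine): in a chalk bracket, each region’s match-ups are fixed (1 vs. 16 through 8 vs. 9 in the first round, 1 vs. 8 through 4 vs. 5 in the second, and so on), the four regions are identical, and the three Final Four games each pit #1 seeds against each other at probability 1/2.

import math

# chalk match-ups within a single region, rounds 1 through 4
region_games = [(1, 16), (2, 15), (3, 14), (4, 13),
                (5, 12), (6, 11), (7, 10), (8, 9),
                (1, 8), (2, 7), (3, 6), (4, 5),
                (1, 4), (2, 3),
                (1, 2)]

p_region = math.prod(p_win(i, j) for i, j in region_games)

# four identical regions, then three #1-vs-#1 games at probability 1/2 each
p_chalk = p_region ** 4 * (1 / 2) ** 3
print(1 / p_chalk)   # should land near the "1 in 90 billion" quoted above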

The third question is the motivation for this post: the probability of a given year’s actual outcome will generally lie somewhere between the other two “extremes.” How has this probability varied over the years, and was 2018 really an outlier? The results are shown in the figure below.

Probability of a perfect bracket, 1985-2018.

The constant black line at the bottom is the 1 in 9.2 quintillion coin flip. The constant red and blue lines at the top are the probabilities of a chalk bracket, assuming the linear or normal strength models, respectively.

And in between are the actual outcomes of each tournament. (Aside: I tried a bar chart for this, but I think the line plot more clearly shows the comparison of the two models, as well as both the maximum and minimum behavior that we’re interested in here.) This year’s 2018 tournament was indeed the most unlikely, so to speak, although it has close competition, all in this decade. At the other extreme, 2007 was the most likely bracket.

Reference:

  1. Schwertman, N., McCready, T., and Howard, L., Probability Models for the NCAA Regional Basketball Tournaments, The American Statistician, 45(1) February 1991, p. 35-38 [JSTOR]

Whodunit logic puzzle

You are a detective investigating a robbery, with five suspects having made the following statements:

  • Paul says, “Neither Steve nor Ted was in on it.”
  • Quinn says, “Ray wasn’t in on it, but Paul was.”
  • Ray says, “If Ted was in on it, then so was Steve.”
  • Steve says, “Paul wasn’t in on it, but Quinn was.”
  • Ted says, “Quinn wasn’t in on it, but Paul was.”

You do not know which, nor even how many, of the five suspects were involved in the crime. However, you do know that every guilty suspect is lying, and every innocent suspect is telling the truth. Which suspect or suspects committed the crime?

I think puzzles similar to this one make good, fun homework problems in a discrete mathematics course introducing propositional logic. However, this particular puzzle is a bit more complex than the usual “Which one of three suspects is guilty?” type, not just because there are more suspects, but also because we don’t know how many suspects are guilty.

That added complexity is motivated by trying to transform this typically pencil-and-paper mathematical logic problem into a potentially nice computer science programming exercise: consider writing a program to automate solving this problem… or even better, writing a program to generate new random instances of problems like this one, while ensuring that the resulting puzzle has some reasonably “nice” properties. For example, the solution should be unique; but the puzzle should also be “interesting,” in that we should need all of the suspects’ statements to deduce who is guilty (that is, any proper subset of the statements should imply at least two distinct possible solutions).
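
For example, a brute-force solver takes only a few lines of Python, checking each of the 2^5 guilty/innocent assignments against the constraint that every statement’s truth value matches its speaker’s innocence (this sketch, and its variable names, are mine):

from itertools import product

names = ['Paul', 'Quinn', 'Ray', 'Steve', 'Ted']

for assignment in product([False, True], repeat=len(names)):
    guilty = dict(zip(names, assignment))
    statements = {
        'Paul': not guilty['Steve'] and not guilty['Ted'],
        'Quinn': not guilty['Ray'] and guilty['Paul'],
        'Ray': not guilty['Ted'] or guilty['Steve'],   # "if Ted, then Steve"
        'Steve': not guilty['Paul'] and guilty['Quinn'],
        'Ted': not guilty['Quinn'] and guilty['Paul'],
    }
    # every guilty suspect lies, and every innocent suspect tells the truth
    if all(statements[name] != guilty[name] for name in names):
        print([name for name in names if guilty[name]])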


Coupling from the past

Introduction

This is a follow-up to one of last month’s posts that contained some images of randomly-generated “lozenge tilings.” The focus here is not on the tilings themselves, but on the perfectly random sampling technique due to Propp and Wilson known as “coupling from the past” (see references below) that was used to generate them. As is often the case here, I don’t have any new mathematics to contribute; the objective of this post is just to go beyond the pseudo-code descriptions of the technique in most of the literature, and provide working Python code with a couple of different example applications.

The basic problem is this: suppose that we want to sample from some probability distribution \pi over a finite discrete space S, and that we have an ergodic Markov chain whose steady state distribution is \pi. An approximate sampling procedure is to pick an arbitrary initial state, then just run the chain for some large number of iterations, with the final state as your sample. This idea has been discussed here before, in the context of card shuffling and the board game Monopoly, as well as the lozenge tilings from last month.

For example, consider shuffling a deck of cards using the following iterative procedure: select a random adjacent pair of cards, and flip a coin to decide whether to put them in ascending or descending order (assuming any convenient total ordering, e.g. first by rank, then by suit alphabetically). Repeat for some “large” number of iterations.

There are two problems with this approach:

  1. It’s approximate; the longer we iterate the chain, the closer we get to the steady state distribution… but (usually) no matter when we stop, the distribution of the resulting state is never exactly \pi.
  2. How many iterations are enough? For many Markov chains, the mixing time may be difficult to analyze or even estimate.

Coupling from the past

Coupling from the past can be used to address both of these problems. For a surprisingly large class of applications, it is possible to sample exactly from the stationary distribution of a chain, by iterating multiple realizations of the chain until they are “coupled,” without needing to know ahead of time when to stop.

First, suppose that we can express the random state transition behavior of the chain using a fixed, deterministic function f:S \times [0,1) \to S, so that for a random variable U uniformly distributed on the unit interval,

\forall i,j\ P(f(s_i, U)=s_j) = p_{i,j}

In other words, given a current state X_t and random draw U_t, the next state is X_{t+1}=f(X_t, U_t).

Now consider a single infinite sequence of random draws (u_1, u_2, u_3, \ldots). We will use this same single source of randomness to iterate multiple realizations of the chain, one for each of the |S| possible initial states… but starting in the past, and running forward to time zero. More precisely, let’s focus on two particular chains with different initial states s_i and s_j at time -n, so that

X_{-n}=s_i, Y_{-n}=s_j

X_{-(n-1)}=f(X_{-n}, u_n), Y_{-(n-1)}=f(Y_{-n}, u_n)

\cdots

X_{-1}=f(X_{-2}, u_2), Y_{-1}=f(Y_{-2}, u_2)

X_0=f(X_{-1}, u_1), Y_0=f(Y_{-1}, u_1)

(Notice how the more random draws we generate, the farther back in time they are used. More on this shortly.)

A key observation is that if X_t=Y_t at any point, then the chains are “coupled”: since they experience the same sequence of randomness influencing their behavior, they will continue to move in lockstep for all subsequent iterations as well, up to X_0=Y_0. Furthermore, if the final state at time zero is the same for all possible initial states run forward from time -n, then the distribution of that final state X_0 is the stationary distribution of the chain.

But what if all initial states don’t end at the same final state? This is where the single source of randomness is key: we can simply look farther back in the past, say 2n time steps instead of n, by extending our sequence of random draws… as long as we re-use the existing random draws to make the same “updates” to the states at later times (i.e., closer to the end time zero).

Monotone coupling

So all we need to do to perfectly sample from the stationary distribution is

  1. Choose a sufficiently large n to look far enough into the past.
  2. Run |S| realizations of the chain, one for each initial state, all starting at time -n and running up to time zero.
  3. As long as the final state X_0 is the same for all initial states, output X_0 as the sample. Otherwise, look farther back in the past (n \leftarrow 2n), generate additional random draws accordingly, and go back to step 2.

This doesn’t seem very helpful, since step 2 requires enumerating elements of the typically enormous state space. Fortunately, in many cases, including both lozenge tiling and card shuffling, it is possible to simplify the procedure by imposing a partial order \preceq on the state space, with minimum and maximum elements, that is preserved by the update function f. That is, suppose that for all states s_i,s_j \in S and random draws u, if s_i \preceq s_j, then f(s_i, u) \preceq f(s_j,u).

Then instead of running the chain for all possible initial states, we can just run two chains, starting with the minimum and maximum elements in the partial order. If those two chains are coupled by time zero, then they will also “squeeze” any other initial state into the same coupling.

Following is the resulting Python implementation:

import random

def monotone_cftp(mc):
    """Return stationary distribution sample using monotone CFTP."""
    updates = [mc.random_update()]
    while True:
        # run the minimum and maximum chains forward to time zero, applying
        # the same random updates to both
        s0, s1 = mc.min_max()
        for u in updates:
            s0.update(u)
            s1.update(u)
        if s0 == s1:
            break
        # not coupled yet: prepend new random draws, so that they are used
        # farther back in the past (existing draws are re-used closer to time zero)
        updates = [mc.random_update()
                   for _ in range(len(updates) + 1)] + updates
    return s0

Sort of like the Fisher-Yates shuffle, this is one of those algorithms that is potentially easy to implement incorrectly. In particular, note that the list of “updates,” which is traversed in order during iteration of the two chains, is prepended with additional blocks of random draws as needed to start farther back in the past. The figure below shows the resulting behavior, mapping each output of the random number generator to the time at which it is used to update the chains.

Random draws in the order they are generated, vs. the order in which they are used in state transitions.

Coming back once again to the card shuffling example, the code below uses coupling from the past with random adjacent transpositions to generate an exactly uniform random permutation p. (A similar example generating random lozenge tilings may be found at the usual location here.)

class Shuffle:
    def __init__(self, n, descending=False):
        self.p = list(range(n))
        if descending:
            self.p.reverse()

    def min_max(self):
        n = len(self.p)
        return Shuffle(n, False), Shuffle(n, True)

    def __eq__(self, other):
        return self.p == other.p

    def random_update(self):
        return (random.randint(0, len(self.p) - 2),
                random.randint(0, 1))

    def update(self, u):
        # u = (position k, coin c): put cards k and k+1 in ascending order
        # if c == 1, descending order if c == 0
        k, c = u
        if (c == 1) != (self.p[k] < self.p[k + 1]):
            self.p[k], self.p[k + 1] = self.p[k + 1], self.p[k]

print(monotone_cftp(Shuffle(52)).p)

The minimum and maximum elements of the partial order are the identity and reversal permutations, as might be expected; but it’s an interesting puzzle to determine the actual partial order that these random adjacent transpositions preserve. Consider, for example, a similar process where we select a random adjacent pair of cards, then flip a coin to decide whether to swap them or not (vs. whether to put them in ascending or descending order). This process has the same uniform stationary distribution, but won’t work when applied to monotone coupling from the past.

Re-generating vs. storing random draws

One final note: although the above implementation is relatively easy to read, it may be prohibitively expensive to store all of the random draws as they are generated. The slightly more complex implementation below is identical in output behavior, but is more space-efficient by only storing “markers” in the random stream, re-generating the draws themselves as needed when looking farther back into the past.

import random

def monotone_cftp(mc):
    """Return stationary distribution sample using monotone CFTP."""
    # each entry is a (saved RNG state, number of draws) marker; the draws are
    # re-generated from the saved state rather than stored individually
    updates = [(random.getstate(), 1)]
    while True:
        s0, s1 = mc.min_max()
        rng_next = None
        for rng, steps in updates:
            random.setstate(rng)
            for _ in range(steps):
                u = mc.random_update()
                s0.update(u)
                s1.update(u)
            if rng_next is None:
                # RNG state following the newest block of draws, i.e. the
                # block used farthest back in the past
                rng_next = random.getstate()
        if s0 == s1:
            break
        # not coupled yet: prepend a new, larger block of fresh draws
        updates.insert(0, (rng_next, 2 ** len(updates)))
    random.setstate(rng_next)
    return s0

 

References:

  1. Propp, J. and Wilson, D., Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics, Random Structures and Algorithms, 9 1996, p. 223-252 [PDF]
  2. Wilson, D., Mixing times of lozenge tiling and card shuffling Markov chains, Annals of Applied Probability, 14(1) 2004, p. 274-325 [PDF]

How high can you count in limited space?

Introduction

I recently saw several different checks, all from the same bank, each with a different dollar amount printed in a fixed-width font, similar to the shortened example below:

**PAY EXACTLY**twelve thousand thirty-four and 56/100****************

Counting asterisks confirmed that the amount fields on each check were all padded to exactly the same length… which made me wonder, what is the largest check that the bank could print, using English words for the dollar amount?

Actually, that’s not quite the question I had in mind. There are very short names for very large numbers, but it’s not very helpful to say that a bank can cut a check for, say, “one googol” dollars, when most smaller amounts would not fit in the same length of field.

So to make the problem less linguistic and more mathematical, let’s instead ask the following more precise question: what is the largest integer n(m) such that every dollar amount from zero to n(m) can be printed in words using at most m characters?

Converting integers into words

For short fields, such as m=67 in the case of the bank checks, brute force is sufficient; we just need a function to convert integers into words. This is a pretty common programming problem, with one Python implementation shown below, where integer_name(n) works for any |n|<10^{36}:

units = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven',
    'eight', 'nine', 'ten', 'eleven', 'twelve', 'thirteen', 'fourteen',
    'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen']
tens = ['zero', 'ten', 'twenty', 'thirty', 'forty', 'fifty', 'sixty',
    'seventy', 'eighty', 'ninety']
powers = ['zero', 'thousand', 'million', 'billion', 'trillion',
    'quadrillion', 'quintillion', 'sextillion', 'septillion', 'octillion',
    'nonillion', 'decillion']
hundred = 'hundred'
minus = 'minus'
comma = ','
and_ = ' and'
space = ' '
hyphen = '-'
empty = ''

def small_integer_name(n, use_and=False):
    s = empty
    if n >= 100:
        q, n = divmod(n, 100)
        s = units[q] + space + hundred + (
            (and_ if use_and else empty) + space if n > 0 else empty)
    if n >= 20:
        q, n = divmod(n, 10)
        s += tens[q] + (hyphen if n > 0 else empty)
    return (s + units[n] if n > 0 else s)

def integer_name(n, use_comma=False, use_and=False, power=0):
    if n < 0:
        return minus + space + integer_name(-n, use_comma, use_and)
    elif n == 0:
        return units[0]
    s = empty
    if n >= 1000:
        q, n = divmod(n, 1000)
        s = integer_name(q, use_comma, use_and, power + 1) + (
            (comma if use_comma else empty) + space if n > 0 else empty)
    return (s + small_integer_name(n, use_and) +
            (space + powers[power] if power > 0 else empty) if n > 0 else s)

There are a couple of things to note. First, although typical American style does not include commas separating thousands, or the word and between hundreds and tens (e.g., “one hundred and twenty-three”), it’s nice to have the option here, since it affects the calculation of n(m).

Also, “everything” is a variable, including the hyphen, comma, space, even the empty string. The addition operator is overloaded for string concatenation, but it’s easy to replace the string constants with, say, integer word counts, or syllable counts, or whatever, without changing the code. For example, we can automatically generate lousy haiku:

One hundred twenty-

one thousand seven hundred

seventy-seven

Coming back to the bank checks, it turns out that n(67)=1113322, so that the bank can print a check for any amount up to $1,113,322.99. Which doesn’t seem terribly large; I wonder if they fall back to using numerals if the amount doesn’t fit using words?
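
For what it’s worth, the brute force itself is just a loop on top of integer_name (the helper n_max below is mine; it uses the default style with no commas and no “and”):

def n_max(m):
    """Largest n such that every amount from 0 to n fits in at most m characters."""
    n = 0
    while len(integer_name(n + 1)) <= m:
        n += 1
    return n

print(n_max(67))   # the n(67) = 1113322 quoted above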

Twitter scale

This isn’t a new problem. A couple of years ago, FiveThirtyEight.com’s Riddler posed a problem involving the Twitter account @CountVonCount, of Sesame Street fame, essentially asking for n(139)… and then again asking for n(279) just last month, in response to the recent increase in maximum length of a tweet from 140 to 280 characters (in each case, less one character to account for an exclamation point).

These field widths are large enough that a brute force approach is no longer feasible. It’s a nice problem to implement a function to compute n(m) efficiently and exactly, with results as shown in the figure below.

How high can you count (on the y-axis) spelling each number in words using at most the number of characters on the x-axis?

All of the source code is available at the usual location here.


Proofs and theorems without words

Introduction

In the figure below, a regular hexagon with side length 12 is tiled with “lozenges,” sort of like triangular dominoes, each consisting of a pair of unit-length equilateral triangles joined at a common side.

Each lozenge is in one of three possible orientations. Although the orientations appear random, if we count carefully, we find that there are exactly the same number of lozenges in each of the three orientations. In fact, no matter how we tile the hexagon, it is always the case that the number of lozenges (144 in this case) of each “type” is fixed. Can you prove this?

Proof without words

Although the (proof of the) Pythagorean theorem seems to be the most common example of a proof without words (usually a picture or diagram that doesn’t require any words to explain), this problem, with its “proof” below, is probably my favorite.

One reason I like this problem, particularly as motivation for student discussion, is that it is somewhat controversial. Is the above “proof without words” really a satisfactory proof?

Theorem without words

Moving away from random tilings for the moment, the original motivation for this post wasn’t actually a proof without words, but a theorem without words. I recently learned an interesting result I had not seen before, involving no more than high school-level physics, that I thought could be presented as an animation without any accompanying explanation:

Unfortunately, I’m not sure this quite succeeds as a “theorem without words.” As with the tiling proof, there is more going on here than the above animation arguably conveys. In particular, one of the most interesting things about this problem– that the animation doesn’t really show– is that the shape (that is, the eccentricity) of the ellipse of apogees is invariant: it does not depend on the speed of the projectile, or gravity, or any relationship between the two.

More on random lozenge tilings

Finally, some source code: although there are plenty of papers and web sites with images of random tilings like those above, I wanted to make my own images, and working code is harder to find. See here for my Python implementation for generating a random lozenge tiling of a hexagon of a given size. For example, the following figure shows a random tiling of a hexagon with side length 64:

My implementation is quick and dirty in the sense that it simply iterates the Markov chain of single-step up-or-down moves in the corresponding family of non-intersecting lattice paths, essentially shuffling long enough for the resulting random tiling to be approximately uniform. (Note that the tilings in the above images are more random than they might appear, despite the “frozen” regions near the corners.) It would be interesting to extend this implementation to use coupling from the past, to generate an exactly uniform random tiling.

References:

  1. David, G. and Tomei, C., The Problem of the Calissons, American Mathematical Monthly, 96(5) May 1989, p. 429-431 [JSTOR]
  2. Wilson, D., Mixing Times of Lozenge Tiling and Card Shuffling Markov Chains, Annals of Applied Probability, 14(1) 2004, p. 274-325 [PDF]