Suppose that Alice and Bob play a game for a dollar: they roll a single six-sided die repeatedly, until either:

- each of the six possible faces has been observed at least once, in which case Alice wins; or
- some one face has been observed six times, in which case Bob wins.

Would you rather be Alice or Bob in this scenario? Or does it matter? You can play a similar game with a deck of playing cards: shuffle the deck, and deal one card at a time from the top of the deck, with Alice winning when all four suits are dealt, and Bob winning when any particular suit is dealt four times. (This version has the advantage of providing a built-in record of the history of deals, in case of argument over who actually wins.) Again, would you rather be Alice or Bob?

It turns out that Alice has a distinct advantage in both games, winning nearly three times more often than Bob in the dice version, and nearly twice as often in the card version. The objective of this post is to describe some interesting mathematics involved in these games, and relate them to the game of Bingo, where a similar phenomenon is described in a recent *Math Horizons* article (see reference below): the winning Bingo card among multiple players is much more likely to have a *horizontal* bingo (all numbers in some row) than vertical (all numbers in some column).
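These claimed probabilities are easy to sanity-check with a quick Monte Carlo simulation of the dice game (a sketch in Python; the helper name `alice_wins` is mine):

```python
import random

def alice_wins(rng):
    """Play one round of the dice game; return True if Alice wins."""
    counts = [0] * 6
    while True:
        counts[rng.randrange(6)] += 1
        if all(counts):
            return True   # all six faces observed at least once
        if max(counts) == 6:
            return False  # some face observed six times

rng = random.Random(2017)
trials = 100_000
p_alice = sum(alice_wins(rng) for _ in range(trials)) / trials
print(p_alice)  # consistently near 0.75
```

Note that the check for Alice's win comes first, but the two conditions can never be triggered by the same roll: the roll that completes the set of faces takes some count from 0 to 1, not to 6.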

**Bingo with a single card**

First, let’s describe how Bingo works with just one player. A Bingo card is a 5-by-5 grid of numbers, with each column containing 5 numbers randomly selected without replacement from 15 possibilities: the first “B” column is randomly selected from the numbers 1-15, the second “I” column is selected from 16-30, the third column from 31-45, the fourth column from 46-60, and the fifth “O” column from 61-75. An example is shown below.

A “caller” randomly draws, without replacement, from a pool of balls numbered 1 through 75, calling each number in turn as it is drawn, with the player marking the called number if it is present on his or her card. The player wins by marking all 5 squares in any row, column, or diagonal. (One minor wrinkle in this setup is that, in typical American-style Bingo, the center square is “free,” considered to be already marked before any numbers are called.)
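These rules are straightforward to simulate. The following sketch (my function names; it ignores the free square and allows diagonals, matching the exact computation below) estimates the number of calls needed for a single card:

```python
import random

def draws_until_bingo(rng):
    """Deal one random card and call balls until the card has a bingo
    (rows, columns, and diagonals; no free center square)."""
    cols = [rng.sample(range(15 * i + 1, 15 * (i + 1) + 1), 5) for i in range(5)]
    rows = [[cols[c][r] for c in range(5)] for r in range(5)]
    lines = [set(r) for r in rows] + [set(c) for c in cols]
    lines += [{rows[i][i] for i in range(5)}, {rows[i][4 - i] for i in range(5)}]
    marked = set()
    for draw, ball in enumerate(rng.sample(range(1, 76), 75), start=1):
        marked.add(ball)
        if any(line <= marked for line in lines):
            return draw

rng = random.Random(2017)
mean = sum(draws_until_bingo(rng) for _ in range(10000)) / 10000
```

The sample mean lands near the exact expected value of about 43.5 computed below.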

It will be useful to generalize this setup with parameters $(n, m)$, where each card is an $n \times n$ grid with each column selected from $m$ possible values, so that standard Bingo corresponds to $(n, m) = (5, 15)$.

We can compute the probability distribution of the number of draws required for a single card to win. Bill Butler describes one approach, enumerating the possible partially-marked cards and computing the probability of at least one bingo for each such marking.

Alternatively, we can use inclusion-exclusion to compute the cumulative distribution directly, by enumerating just the possible combinations of horizontal, vertical, and diagonal bingos (of which there are 5, 5, and 2, respectively) on a card after $k$ draws. In Mathematica:

```mathematica
bingoSets[n_, free_: True, diag_: True] :=
  Module[{card = Partition[Range[n^2], n]},
    If[free, card[[Ceiling[n/2], Ceiling[n/2]]] = 0];
    DeleteCases[
      Join[card, Transpose[card],
        If[diag, {Diagonal[card], Diagonal[Reverse[card]]}, {}]],
      0, Infinity]]

bingoCDF[k_, nm_, bingos_] :=
  Module[{j},
    1 - Total@Map[
      (j = Length[Union @@ #]; (-1)^Length[#] Binomial[nm - j, k - j]) &,
      Subsets[bingos]]/Binomial[nm, k]]

bingos = bingoSets[5, False, True];
cdf = Table[bingoCDF[k, 75, bingos], {k, 1, 75}];
```

(Note the optional arguments specifying whether the center square is free, and whether diagonal bingo is allowed. It will be convenient shortly to consider a simplified version of the game, where these two “special” rules are discarded.)

The following figure shows the resulting distribution, with the expected number of draws, approximately 43.546, shown in red.
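The same inclusion-exclusion computation ports directly to Python, for readers without Mathematica (a sketch; the function names are mine):

```python
from itertools import combinations
from math import comb

def bingo_lines(n, free=True, diag=True):
    """Row/column/diagonal position sets for an n-by-n card (positions 0..n*n-1)."""
    grid = [[r * n + c for c in range(n)] for r in range(n)]
    lines = [set(row) for row in grid] + [set(col) for col in zip(*grid)]
    if diag:
        lines += [{grid[i][i] for i in range(n)},
                  {grid[i][n - 1 - i] for i in range(n)}]
    if free:
        center = grid[n // 2][n // 2]
        lines = [line - {center} for line in lines]  # free square is pre-marked
    return lines

def bingo_cdf(k, nm, lines):
    """P(at least one bingo after k of the nm numbers have been called)."""
    total = 0
    for size in range(len(lines) + 1):
        for subset in combinations(lines, size):
            j = len(set().union(*subset)) if subset else 0
            if k >= j:
                # P(j specific squares all marked) = C(nm-j, k-j) / C(nm, k)
                total += (-1) ** size * comb(nm - j, k - j)
    return 1 - total / comb(nm, k)
```

For example, `bingo_cdf(2, 4, bingo_lines(2, free=False, diag=False))` returns 2/3, matching the simplified two-by-two game discussed below.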

**Independence with 2 or more cards**

Before getting to the “horizontal is more likely than vertical” phenomenon, it’s worth pointing out another non-intuitive aspect of Bingo. If instead of just a single card, we have a game with multiple players, possibly with *thousands* of different cards, what is the distribution of number of draws until someone wins?

If $F(k)$ is the cumulative distribution for a *single* card as computed above, then since each of multiple cards is randomly– and *independently–* “generated,” intuitively it seems like the probability of at least one winning bingo among $c$ cards in at most $k$ draws should be given by $1 - (1 - F(k))^c$.

Butler uses exactly this approach. However, this is incorrect; although the *values* in the squares of multiple cards are independent, the presence or absence of winning bingos is not. Perhaps the best way to see this is to consider a “smaller” simplified version of the game, with $(n, m) = (2, 2)$, so that there are only four equally likely possible distinct cards:

Let’s further simplify the game so that only horizontal and vertical bingos are allowed, with no diagonals. Then the game must end after either two or three draws: with a single card, it ends in two draws with probability 2/3. However, with two cards, the probability of a bingo in two draws is 5/6, not $1 - (1 - 2/3)^2 = 8/9$.
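This claim is small enough to verify by exhaustive enumeration, using exact rational arithmetic (a sketch; the setup and names are mine):

```python
from fractions import Fraction
from itertools import combinations, product

# The four possible simplified cards: number the four values 0..3, with
# column groups {0, 1} and {2, 3}; each column is one of two orderings.
cards = []
for c1 in [(0, 1), (1, 0)]:
    for c2 in [(2, 3), (3, 2)]:
        rows = [{c1[0], c2[0]}, {c1[1], c2[1]}]
        cols = [set(c1), set(c2)]
        cards.append(rows + cols)  # horizontal and vertical lines only

def bingo_in_two(card, pair):
    """True if the first two draws (a pair of values) complete a line."""
    return any(line == set(pair) for line in card)

pairs = list(combinations(range(4), 2))  # equally likely first two draws

# One card: probability of a bingo in two draws.
p1 = Fraction(sum(bingo_in_two(c, d) for c, d in product(cards, pairs)),
              len(cards) * len(pairs))

# Two independent random cards: probability at least one wins in two draws.
p2 = Fraction(sum(bingo_in_two(a, d) or bingo_in_two(b, d)
                  for a, b, d in product(cards, cards, pairs)),
              len(cards) ** 2 * len(pairs))
```

The enumeration gives `p1 == 2/3` and `p2 == 5/6`, not the 8/9 that the independence formula would predict.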

**Horizontal bingos with many cards**

Finally, let’s come back to the initial problem: suppose that there are a large number of players, with so many cards in play that we are effectively guaranteed a winner as soon as either:

- At least one number from each of the $n$ column groups is drawn, resulting in a horizontal bingo on some card; or
- At least $n$ of the $m$ possible numbers are drawn from any one particular column group, resulting in a vertical bingo on some card.

(Let’s ignore the free square and diagonal bingos for now; the former is easily handled but unnecessarily complicates the analysis, while the latter would mean that (1) and (2) are not mutually exclusive.)

Then the interesting observation is that a horizontal bingo (1) is over three times more likely to occur than a vertical bingo (2). Furthermore, this setup– Bingo with a large number of cards– is effectively the same as the card and dice games described in the introduction: Bingo is $(n, m) = (5, 15)$, the card game is $(n, m) = (4, 13)$, and the dice version is effectively $(n, m) = (6, \infty)$, since faces are “drawn” with replacement.

The *Math Horizons* article referenced below describes an approach to calculating these probabilities, which involves enumerating integer partitions. However, this problem is ready-made for generating functions, which take care of the partition housekeeping for us: let’s define

so that, for example, for Bingo with no free square,

Intuitively, each factor corresponds to a column group, where the coefficient of $x^j$ indicates the number of ways to draw exactly $j$ numbers from that group (with some specified minimum number of draws from each group). The overall coefficient of $x^k$ indicates the number of ways to draw $k$ numbers in total, *with neither a horizontal nor vertical bingo*.

Then, using the notation from the article, the probability $P(H_k)$ of a horizontal bingo on *exactly* the $k$-th draw is

and the probability $P(V_k)$ of a vertical bingo on exactly the $k$-th draw is

The summation $\sum_k P(H_k)$ over all possible numbers of draws yields the overall probability of about 0.752 that a horizontal bingo is observed before a vertical one. Similarly, for the card game with $(n, m) = (4, 13)$, the probability that Alice wins is 22543417/34165005, or about 0.66. For the dice game– which requires a slight modification to the above formulation, left as an exercise for the reader– Alice wins with probability about 0.747.
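At least the card game value is easy to confirm independently, without generating functions, by exact recursion over the counts of cards dealt so far in each suit (a sketch; `p_alice` is my name for it):

```python
from fractions import Fraction
from functools import lru_cache

SUITS, PER_SUIT, REPEATS = 4, 13, 4

@lru_cache(maxsize=None)
def p_alice(counts):
    """P(Alice wins) given the number of cards dealt so far in each suit."""
    if all(counts):
        return Fraction(1)      # all four suits seen: Alice wins
    if max(counts) == REPEATS:
        return Fraction(0)      # some suit seen four times: Bob wins
    remaining = [PER_SUIT - c for c in counts]
    total = sum(remaining)
    return sum(Fraction(r, total) *
               p_alice(counts[:i] + (counts[i] + 1,) + counts[i + 1:])
               for i, r in enumerate(remaining) if r)

p = p_alice((0, 0, 0, 0))
print(p)  # matches 22543417/34165005 from above
```

Note that the two winning conditions can never trigger on the same deal: the card that completes the fourth suit takes that suit's count from 0 to 1, not to 4.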

**Reference:**

- Benjamin, A., Kisenwether, J., and Weiss, B., The Bingo Paradox, *Math Horizons*, **25**(1) September 2017, p. 18-21 [PDF]


This post was initially motivated by an interesting recent article by Chris Wellons discussing the Blowfish cipher. The Blowfish cipher’s subkeys are initialized with values containing the first 8336 hexadecimal digits of π, the idea being that implementers may compute these digits for themselves, rather than trusting the integrity of explicitly provided “random” values.

So, how do we compute hexadecimal– or decimal, for that matter– digits of π? This post describes several methods for computing digits of π and other well-known constants, as well as some implementation issues and open questions that I encountered along the way.

**Pi is easy with POSIX**

First, Chris’s implementation of the Blowfish cipher includes a script to automatically generate the code defining the subkeys. The following two lines do most of the work:

```shell
cmd='obase=16; scale=10040; a(1) * 4'
pi="$(echo "$cmd" | bc -l | tr -d '\n\\' | tail -c+3)"
```

This computes base-16 digits of π as `a(1) * 4`, or 4 times the arctangent of 1 (i.e., π), using the POSIX arbitrary-precision calculator bc. Simple, neat, end of story.

How might we do the same thing on Windows? There are plenty of approximation formulas and algorithms for computing digits of π, but to more precisely specify the requirements I was interested in, is there an algorithm that generates digits of π:

- in any given base,
- one digit at a time “indefinitely,” i.e., without committing to a fixed precision ahead of time,
- with a relatively simple implementation,
- using only arbitrary-precision *integer* arithmetic (such as is built into Python, or maybe C++ with a library)?

**Bailey-Borwein-Plouffe**

The Bailey-Borwein-Plouffe (BBP) formula seems ready-made for our purpose:

$$\pi = \sum_{k=0}^{\infty} \frac{1}{16^k} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)$$

This formula has the nice property that it may be used to efficiently compute the $n$-th hexadecimal digit of π, without having to compute all of the previous digits along the way. Roughly, the approach is to multiply everything by $16^n$, then use modular exponentiation to collect and discard the integer part of the sum, leaving the fractional part with enough precision to accurately extract the $n$-th digit.
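Here is a sketch of that idea in Python (the function name is mine; plain double-precision arithmetic limits it to modest positions $n$, and a careful implementation would carry more guard digits):

```python
def bbp_hex_digit(n):
    """Hexadecimal digit of pi at (0-indexed) position n after the point."""
    def frac_sum(j):
        # fractional part of sum over k of 16^(n-k) / (8k + j)
        s = 0.0
        for k in range(n + 1):
            # modular exponentiation discards the integer part of each term
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1
        while 16.0 ** (n - k) / (8 * k + j) > 1e-17:
            s = (s + 16.0 ** (n - k) / (8 * k + j)) % 1.0
            k += 1
        return s

    x = (4 * frac_sum(1) - 2 * frac_sum(4) - frac_sum(5) - frac_sum(6)) % 1.0
    return int(16 * x)

print(''.join('{:x}'.format(bbp_hex_digit(n)) for n in range(8)))  # 243f6a88
```

Those first eight hex digits, 243f6a88, are exactly the first Blowfish subkey.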

However, getting the implementation details right can be tricky. For example, this site provides source code and example data containing one million hexadecimal digits of π generated using the BBP formula… but roughly one out of every 55 digits or so is incorrect.

But suppose that we don’t want to “look ahead,” but instead want to generate *all* hexadecimal digits of π, one after the other from the beginning. Can we still make use of this formula in a simpler way? For example, consider the following Python implementation:

```python
def pi_bbp():
    """Conjectured BBP generator of hex digits of pi."""
    a, b = 0, 1
    k = 0
    while True:
        ak, bk = (120 * k**2 + 151 * k + 47,
                  512 * k**4 + 1024 * k**3 + 712 * k**2 + 194 * k + 15)
        a, b = (16 * a * bk + ak * b, b * bk)
        digit, a = divmod(a, b)
        yield digit
        k = k + 1

for digit in pi_bbp():
    print('{:x}'.format(digit), end='')
```

The idea is similar to converting a fraction to a string in a given base: multiply by the base (16 in this case), extract the next digit as the integer part, then repeat with the remaining fractional part. Here `a / b` is the running fractional part, and `ak / bk` is the current term in the BBP summation. (Using the `fractions` module doesn’t significantly improve readability, and is *much* slower than managing the numerators and denominators directly.)

Now for the interesting part: although this implementation appears to behave correctly– at least for the first 500,000 digits where I stopped testing– it isn’t clear to me that it is *always* correct. That is, I don’t see how to prove that this algorithm will continue to generate correct hexadecimal digits of π indefinitely. Perhaps a reader can enlighten me.

(Initial thoughts: Since it’s relatively easy to show that each term in the summation is positive, I think it would suffice to prove that the algorithm never generates an “invalid” hexadecimal digit that is greater than 15. But I don’t see how to do this, either.)

Interestingly, Bailey *et al.* conjecture (see Reference 1 below) a similar algorithm that they have verified out to 10 million hexadecimal digits. The algorithm involves a strangely similar-but-slightly-different approach and formula:

```python
def pi_bbmw():
    """Conjectured BBMW generator of hex digits of pi."""
    a, b = 0, 1
    k = 0
    yield 3
    while True:
        k = k + 1
        ak, bk = (120 * k**2 - 89 * k + 16,
                  512 * k**4 - 1024 * k**3 + 712 * k**2 - 206 * k + 21)
        a, b = (16 * a * bk + ak * b, b * bk)
        a = a % b
        yield 16 * a // b
```

Unfortunately, this algorithm is slower, requiring one more expensive arbitrary-precision division operation per digit than the BBP version.

**Proven algorithms**

Although the above two algorithms are certainly short and sweet, (1) they only work for generating hexadecimal digits (vs. decimal, for example), and (2) we don’t actually know if they are correct. Fortunately, there are other options.

Gibbons (Reference 2) describes an algorithm that is not only proven correct, but works for generating digits of π in any base:

```python
def pi_gibbons(base=10):
    """Gibbons spigot generator of digits of pi in given base."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = (base * q, base * (r - n * t), t, k,
                                (base * (3 * q + r)) // t - base * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
```

The bad news is that this is by far the slowest algorithm that I investigated, nearly an order of magnitude slower than BBP on my laptop.

The good news is that there is at least one other algorithm that is not only competitive with BBP in terms of throughput, but is also general enough to easily compute– in any base– not just the digits of π, but also $e$ (the base of the natural logarithm), $\phi$ (the golden ratio), and others.

The idea is to express the desired value as a generalized continued fraction:

$$x = a_0 + \cfrac{b_1}{a_1 + \cfrac{b_2}{a_2 + \cfrac{b_3}{a_3 + \cdots}}}$$

where in particular π may be represented as

$$\pi = 0 + \cfrac{4}{1 + \cfrac{1^2}{3 + \cfrac{2^2}{5 + \cfrac{3^2}{7 + \cdots}}}}$$

Then digits may be extracted similarly to the BBP algorithm above: iteratively refine the *convergent* (i.e., approximation) of the continued fraction until the integer part doesn’t change; extract this integer part as the next digit, then multiply the remaining fractional part by the base and continue. In Python:

```python
def continued_fraction(a, b, base=10):
    """Generate digits of continued fraction a(0)+b(1)/(a(1)+b(2)/(...)."""
    (p0, q0), (p1, q1) = (a(0), 1), (a(1) * a(0) + b(1), a(1))
    k = 1
    while True:
        (d0, r0), (d1, r1) = divmod(p0, q0), divmod(p1, q1)
        if d0 == d1:
            yield d1
            p0, p1 = base * r0, base * r1
        else:
            k = k + 1
            x, y = a(k), b(k)
            (p0, q0), (p1, q1) = (p1, q1), (x * p1 + y * p0, x * q1 + y * q0)
```

This approach is handy because not only π, but other common constants as well, have generalized continued fraction representations in which the sequences $a_k$ and $b_k$ are “nice.” To generate decimal digits of π:

```python
for digit in continued_fraction(lambda k: 0 if k == 0 else 2 * k - 1,
                                lambda k: 4 if k == 1 else (k - 1)**2, 10):
    print(digit, end='')
```

Or to generate digits of the golden ratio $\phi$:

```python
for digit in continued_fraction(lambda k: 1, lambda k: 1, 10):
    print(digit, end='')
```

**Consuming blocks of a generator**

Finally, once I got around to actually using the above algorithm to try to reproduce Chris’s original code generation script, I accidentally injected a bug that took some thought to figure out. Recall that the Blowfish cipher has a couple of (sets of) subkeys, each populated with a segment of the sequence of hexadecimal digits of π. So we would like to extract a block of digits, do something with it, then extract a *subsequent* block of digits, do something else, etc.

A simple way to do this in Python is with the built-in zip function, which takes multiple iterables as arguments, and returns a single generator that outputs tuples of elements from each of the inputs… and “truncates” to the length of the shortest input. In this case, to extract a fixed number of digits of π, we just `zip` the “infinite” digit generator together with a range of the desired length.

To more clearly see what happens, let’s simplify the context a bit and just try to print the first 10 *decimal* digits of π in two groups of 5:

```python
digits = continued_fraction(lambda k: 0 if k == 0 else 2 * k - 1,
                            lambda k: 4 if k == 1 else (k - 1)**2, 10)
for digit, k in zip(digits, range(5)):
    print(digit, end='')
print()
for digit, k in zip(digits, range(5)):
    print(digit, end='')
```

This doesn’t work: the resulting output blocks are (31415) and (26535)… but π = 3.141592653589793…. We “lost” the 9 in the middle.

The problem is that `zip` evaluates each input iterator in turn, stopping only when one of them is exhausted. In this case, during the 6th iteration of the first loop, we “eat” the 9 from the digit generator *before* we realize that the `range` iterator is exhausted. When we continue to the second block of 5 digits, we can’t “put back” the 9.

This is easy to fix: just reverse the order of the `zip` arguments, so the `range` is exhausted first, *before* eating the extra element of the “real” sequence we’re extracting from.

```python
for k, digit in zip(range(5), digits):
    print(digit, end='')
print()
for k, digit in zip(range(5), digits):
    print(digit, end='')
```

This works as desired, with output blocks (31415) and (92653).

**References:**

- Bailey, D., Borwein, J., Mattingly, A., and Wightwick, G., The Computation of Previously Inaccessible Digits of $\pi^2$ and Catalan’s Constant, *Notices of the American Mathematical Society*, **60**(7) 2013, p. 844-854 [PDF]
- Gibbons, J., Unbounded Spigot Algorithms for the Digits of Pi, *American Mathematical Monthly*, **113**(4) April 2006, p. 318-328 [PDF]


A common development approach in MATLAB is to:

- Write MATLAB code until it’s unacceptably slow.
- Replace the slowest code with a C++ implementation called via MATLAB’s MEX interface.
- Goto step 1.

Regression testing the faster MEX implementation against the slower original MATLAB can be difficult. Even a seemingly line-for-line, “direct” translation from MATLAB to C++, when provided with exactly the same numeric inputs, executed on the same machine, with the same default floating-point rounding mode, can still yield drastically different output. How does this happen?

There are three primary causes of such differences, none of them easy to fix. The purpose of this post is just to describe these causes, focusing on one in particular that I learned occurs more frequently than I realized.

**1. The butterfly effect**

This is where the *drastically* different results typically come from. Even if the *inputs* to the MATLAB and MEX implementations are identical, suppose that just one *intermediate* calculation yields even the smallest possible difference in its result… and is followed by a long sequence of *further* calculations using that result. That small initial difference can be greatly magnified in the final output, due to cumulative effects of rounding error. For example:

```matlab
x = 0.1;
for k = 1:100
    x = 4 * (1 - x);
end
% x == 0.37244749676375793
```

```cpp
double x = 0.10000000000000002;
for (int k = 1; k <= 100; ++k) {
    x = 4 * (1 - x);
}
// x == 0.5453779481420313
```

This example is only contrived in its simplicity, exaggerating the “speed” of divergence with just a few hundred floating-point operations. Consider a more realistic, more complex Monte Carlo simulation of, say, an aircraft autopilot maneuvering in response to simulated sensor inputs. In a *particular* Monte Carlo iteration, the original MATLAB implementation might successfully dampen a severe Dutch roll, while the MEX version might result in a crash (of the aircraft, I mean).

Of course, for this divergence of behavior to occur at all, there must be that *first* difference in the result of an intermediate calculation. So this “butterfly effect” really is just an *effect*— it’s not a cause at all, just a magnified symptom of the two real causes, described below.

**2. Compiler non-determinism**

As far as I know, the MATLAB interpreter is pretty well-behaved and predictable, in the sense that MATLAB source code explicitly specifies the order of arithmetic operations, and the precision with which they are executed. A C++ compiler, on the other hand, has a lot of freedom in how it translates source code into the corresponding sequence of arithmetic operations… and possibly even the precision with which they are executed.

Even if we assume that all operations are carried out in double precision, order of operations can matter; the problem is that this order is not explicitly controlled by the C++ source code being compiled (*edit*: at least for some speed-optimizing compiler settings). For example, when a compiler encounters the statement `double x = a+b+c;`, it could emit code to effectively calculate `(a+b)+c`, or instead `a+(b+c)`, which do not necessarily produce the same result. That is, double-precision addition is not associative:

```matlab
(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3) % this is false
```

Worse, explicit parentheses in the source code *may* help, but they don’t *have* to.

Another possible problem is intermediate precision. For example, in the process of computing the sum `a + b + c`, the intermediate result `a + b` might be computed in, say, 80-bit extended precision, before clamping the final sum to 64-bit double precision. This has bitten me in other ways discussed here before; Bruce Dawson has several interesting articles with much more detail on this and other issues with floating-point arithmetic.

**3. Transcendental functions**

So suppose that we are comparing the results of MATLAB and C++ implementations of the same algorithm. We have verified that the numeric inputs are identical, and we have also somehow verified that the arithmetic operations specified by the algorithm are executed in the same order, all in double precision, in both implementations. Yet the output *still* differs between the two.

Another possible– in fact likely– cause of such differences is in the implementation of transcendental functions such as `sin`, `cos`, `atan2`, `exp`, etc., which are not required by IEEE 754-2008 to be correctly rounded, due to the table maker’s dilemma. For example, following is the first actual instance of this problem that I encountered years ago, reproduced here in MATLAB R2017a (on my Windows laptop):

```matlab
x = 193513.887169782;
y = 44414.97148164646;
atan2(y, x) == 0.2256108075334872
```

while the corresponding C++ implementation (still called from MATLAB, built as a MEX function using Microsoft Visual Studio 2015) yields

```cpp
#include <cmath>
...
std::atan2(y, x) == 0.22561080753348722;
```

The two results differ by an ulp, with (in this case) the MATLAB version being correctly rounded.
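A quick way to confirm that these really are adjacent representable doubles, in Python 3.9+ (the two literals are copied from above):

```python
import math

matlab_result = 0.2256108075334872
cpp_result = 0.22561080753348722

# The next representable double above the MATLAB result is the C++ result:
print(math.nextafter(matlab_result, math.inf) == cpp_result)  # True
```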

(*Rant*: Note that *both* of the above values, despite actually being different, display as the same 0.225610807533487 in the MATLAB command window, which for some reason neglects to provide “round-trip” representations of its native floating-point data type. See here for a function that I find handy when troubleshooting issues like this.)

What I found surprising, after recently exploring this issue in more detail, is that the above example is not an edge case: the MATLAB and C++ `cmath` implementations of the trigonometric and exponential functions disagree quite frequently– and furthermore, the above example notwithstanding, the C++ implementation tends to be more accurate more of the time, significantly so in the case of `atan2` and the exponential functions, as the following figure shows.

The setup: on my Windows laptop, with MATLAB R2017a and Microsoft Visual Studio 2015 (with code compiled from MATLAB as a MEX function with the provided default compiler configuration file), I compared each function’s output for 1 million randomly generated inputs– or pairs of inputs in the case of `atan2`– in the unit interval.

Of those 1 million inputs, I calculated how many yielded different output from MATLAB and C++, where the differences fell into 3 categories:

- Red indicates that MATLAB produced the correctly rounded result, with the exact value *between* the MATLAB and C++ outputs (i.e., both implementations had an error less than an ulp).
- Gray indicates that C++ produced the correctly rounded result, with both implementations having an error of less than an ulp.
- Blue indicates that C++ produced the correctly rounded result, *between* the exact value and the MATLAB output (i.e., MATLAB had an error *greater* than an ulp).

(A few additional notes on test setup: first, I used random inputs in the unit interval, instead of evenly-spaced values with domains specific to each function, to ensure testing all of the mantissa bits in the input, while still allowing some variation in the exponent. Second, I also tested addition, subtraction, multiplication, division, and square root, just for completeness, and verified that they agree exactly 100% of the time, as guaranteed by IEEE-754-2008… at least for *one* such operation in isolation, eliminating any compiler non-determinism mentioned earlier. And finally, I also tested against MEX-compiled C++ using MinGW-w64, with results identical to MSVC++.)

For most of these functions, the inputs where MATLAB and C++ differ are distributed across the entire unit interval. The two-argument form of the arctangent is particularly interesting. The following figure “zooms in” on the 16.9597% of the sample inputs that yielded different outputs, showing a scatterplot of those inputs using the same color coding as above. (The red, where MATLAB provides the correctly rounded result, is so infrequent it is barely visible in the summary bar chart.)

**Conclusion**

This might all seem rather grim. It certainly does seem hopeless to expect that a C++ translation of even modestly complex MATLAB code will preserve exact agreement of output for all inputs. (In fact, it’s even worse than this. Even the same MATLAB code, executed in the same version of MATLAB, but on two different machines with different architectures, will not necessarily produce identical results given identical inputs.)

But exact agreement between different implementations is rarely the “real” requirement. Recall the Monte Carlo simulation example described above. If we run, say, 1000 iterations of a simulation in MATLAB, and the same 1000 iterations with a MEX translation, it may be that iteration #137 yields different output in the two versions. But that *should* be okay… as long as the *distribution* over all 1000 outputs is the same– or sufficiently similar– in both cases.


```c
switch (-1 & 3) {
    case 1: ...
    case 2: ...
    case 3: ...
    ...
}
```

What does this code do? This is interesting because the `switch` expression is a constant that could be evaluated at compile time (indeed, this could just as well have been implemented with a series of `#if`/`#elif` preprocessor directives instead of a `switch-case` statement).

As usual, it seems more fun to present this as a puzzle, rather than just point and say, “This is what I did.” For context, or possibly as a hint, this code was part of a task involving parsing and analyzing digital terrain elevation data (DTED), where it makes at least some sense.


Many theorems in mathematics are of the form, “$p$ if and only if $q$,” where $p$ and $q$ are logical propositions that may be true or false. For example:

**Theorem 1:** An integer $n$ is even if and only if $n^2$ is even.

where in this case $p$ is “$n$ is even” and $q$ is “$n^2$ is even.” The statement of the theorem may itself be viewed as a proposition $p \leftrightarrow q$, where the logical connective $\leftrightarrow$ is read “if and only if,” and behaves like Boolean equality. Intuitively, $p \leftrightarrow q$ states that “$p$ and $q$ are (materially) *equivalent*; they have the same truth value, either both true or both false.”

(Think Boolean expressions in your favorite programming language; for example, the proposition $p \wedge q$, read “$p$ and $q$,” looks like `p && q` in C++, assuming that `p` and `q` are of type `bool`. Similarly, the proposition $p \leftrightarrow q$ looks like `p == q` in C++.)

Now consider extending this idea to the equivalence of more than just *two* propositions. For example:

**Theorem 2:** Let $n$ be an integer. Then the following are equivalent:

- $n$ is even.
- $n^2$ is even.
- $n^3$ is even.

The idea is that the three propositions above (let’s call them $p_1$, $p_2$, and $p_3$) always have the same truth value; either all three are true, or all three are false.

So far, so good. The problem arises when Rosen expresses this general idea of equivalence of multiple propositions as $p_1 \leftrightarrow p_2 \leftrightarrow p_3$.

**Puzzle:** What does this expression mean? A first concern might be that we need parentheses to eliminate any ambiguity. But almost unfortunately, it can be shown that the connective $\leftrightarrow$ is associative, meaning that this is a perfectly well-formed propositional formula even without parentheses. The problem is that it doesn’t mean what it *looks* like it means.
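As a hint, a brute-force truth table comparing the (left-associated) chain against “all three have the same truth value” is only a few lines:

```python
from itertools import product

def iff(p, q):
    """Material equivalence: true when p and q have the same truth value."""
    return p == q

# Truth assignments where the chained connective disagrees with
# "all three propositions are equivalent":
mismatches = [(p, q, r)
              for p, q, r in product([False, True], repeat=3)
              if iff(iff(p, q), r) != (p == q == r)]
```

The mismatch list turns out to be non-empty– which is exactly the problem.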

**Reference:**

- Rosen, K. H. (2011). *Discrete Mathematics and Its Applications* (7th ed.). New York, NY: McGraw-Hill. ISBN-13: 978-0073383095


Suppose that I put 6 identical dice in a cup, and roll them simultaneously (as in Farkle, for example). Then you take those same 6 dice, and roll them all again. What is the probability that we both observe the same outcome? For example, we may both roll one of each possible value (1-2-3-4-5-6, but not necessarily in order), or we may both roll three 3s and three 6s, etc.

I like this problem as an “extra” for combinatorics students learning about generating functions. A numeric solution likely requires some programming (but I’ve been wrong about that here before), but implementation is not overly complex, while being slightly beyond the “usual” type of homework problem in the construction of its solution.
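For reference, a brute-force computation is feasible, since there are only $6^6 = 46656$ equally likely ordered outcomes (a sketch; note that it does compute the numeric answer):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Tally ordered rolls of 6 dice by their sorted multiset of faces.
counts = Counter(tuple(sorted(roll)) for roll in product(range(1, 7), repeat=6))
total = 6 ** 6

# P(both players roll the same multiset) = sum over multisets of P(multiset)^2.
p_match = sum(Fraction(c, total) ** 2 for c in counts.values())
print(p_match, float(p_match))
```

The generating-function approach reproduces the same sum of squared multinomial probabilities without the explicit enumeration.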


Consider the following problem: given a finite set of positive integers, arrange them so that the concatenation of their decimal representations is as small as possible. For example, given the numbers {1, 10, 12}, the arrangement (10, 1, 12) yields the minimum possible value 10112.

I saw a variant of this problem in a recent Reddit post, where it was presented as an “easy” programming challenge, referring in turn to a blog post by Santiago Valdarrama describing it as one of “Five programming problems every software engineer should be able to solve in 1 hour.”

I think the problem is interesting in that it *seems* simple and intuitive– and indeed it does have a solution with a relatively straightforward implementation– but there are also several “intuitive” approaches that *don’t* work… and even for the correct implementation, there is some subtlety involved in *proving* that it really works.

**Brute force**

First, the following Python 3 implementation simply tries all possible arrangements, returning the lexicographically smallest:

```python
import itertools

def min_cat(num_strs):
    return min(''.join(s) for s in itertools.permutations(num_strs))
```

(*Aside*: for convenience in the following discussion, all inputs are assumed to be a list of *strings* of decimal representations of positive integers, rather than the integers themselves. This lends some brevity to the code, without adding to or detracting from the complexity of the algorithms.)

This implementation works because every concatenated arrangement has the same length, so a lexicographic comparison is equivalent to comparing the corresponding numeric values. It’s unacceptably inefficient, though, since we have to consider all possible arrangements of inputs.

**Sorting**

We can do better, by sorting the inputs in non-decreasing order, and concatenating the result. But this is where the problem gets tricky: what order relation should we use?

We can’t just use the natural ordering on the integers; using the same earlier example, the sorted arrangement (1, 10, 12) yields 11012, which is larger than the minimum 10112. Similarly, the sorted arrangement (2, 11) yields 211, which is larger than the minimum 112.

We can’t use the natural lexicographic ordering on strings, either; the initial example (1, 10, 12) fails again here.
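Both failures are quick to check:

```python
nums = ['1', '10', '12']

# Natural integer order: (1, 10, 12) -> '11012', not the minimum 10112.
by_value = ''.join(sorted(nums, key=int))

# Natural lexicographic order on strings gives the same wrong answer here.
by_lex = ''.join(sorted(nums))

print(by_value, by_lex)  # 11012 11012
```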

The complexity arises because the numbers in a given input set may have different lengths, i.e. numbers of digits. If all of the numbers were guaranteed to have the same number of digits, then the numeric and lexicographic orderings are the *same*, and both yield the correct solution. Several users in the Reddit thread, and even Valdarrama, propose “padding” each input in various ways before sorting to address this, but this is also tricky to get right. For example, how should the inputs {12, 121} be padded so that a natural lexicographic ordering yields the correct minimum value 12112?

There *is* a way to do this, which I’ll leave as an exercise for the reader. Instead, consider the following solution (still Python 3):

```python
import functools

def cmp(x, y):
    # compare by which concatenation order yields the smaller integer
    return int(x + y) - int(y + x)

def min_cat(num_strs):
    return ''.join(sorted(num_strs, key=functools.cmp_to_key(cmp)))
```

There are several interesting things going on here. First, a Python-specific wrinkle: we need to specify the order relation by which to sort. This actually would have looked slightly *simpler* in the older Python 2.7, where you could specify a binary comparison function directly. In Python 3, you can only provide a unary key function to apply to each element in the list, and sort by that. It’s an interesting exercise in itself to work out how to “convert” a comparison function into the corresponding key function; here we lean on the built-in `functools.cmp_to_key` to do it for us. (This idea of specifying an order relation by a natural comparison without a corresponding natural key has been discussed here before, in the context of Reddit’s comment ranking algorithm.)
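It’s worth seeing what such a conversion looks like. The following is a minimal sketch (the `Key` class name is mine): wrap each element in a class whose rich comparison methods delegate to the comparison function, which is essentially what `functools.cmp_to_key` does under the hood.

```python
def cmp(x, y):
    # compare by which concatenation order yields the smaller integer
    return int(x + y) - int(y + x)

class Key:
    """Wrap a string so comparisons delegate to cmp (a sketch of cmp_to_key)."""
    def __init__(self, s):
        self.s = s
    def __lt__(self, other):
        return cmp(self.s, other.s) < 0
    def __eq__(self, other):
        return cmp(self.s, other.s) == 0

# sorted() only needs __lt__, so this reproduces the cmp_to_key behavior:
assert ''.join(sorted(['1', '10', '12'], key=Key)) == '10112'
```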

Second, recall that the input `num_strs` is a list of *strings*, not integers, so in the implementation of the comparison `cmp(x, y)`, the arguments are strings, and the addition operators are concatenation. The comparison function returns a negative value if the concatenation $x + y$, interpreted as an integer, is less than $y + x$, zero if they are equal, or a positive value if $x + y$ is greater than $y + x$. The intended effect is to sort according to the relation $\prec$ defined as $x \prec y \iff \mathrm{int}(x + y) < \mathrm{int}(y + x)$.

**It works… but should it?**

This implementation has a nice intuitive justification: suppose that the entire input list contained just two strings $x$ and $y$. Then the comparison function effectively realizes the “brute force” evaluation of the two possible arrangements $x + y$ and $y + x$.

However, that same intuitive reasoning becomes dangerous as soon as we consider input lists with more than two elements. That comparison function should bother us, for several reasons:

First, it’s not obvious that the resulting sorted ordering is even *well-defined*. That is, is the order relation a strict weak ordering of the set of (decimal string representations of) positive integers? It certainly isn’t a *total* ordering, since distinct values can compare as “equal:” for example, consider (1, 11), or (123, 123123), etc.

Second, even assuming the comparison function *does* realize a strict weak ordering (we’ll prove this shortly), that ordering has some interesting properties. For example, unlike the natural ordering on the positive integers, there is no smallest element. That is, for any $x$, we can always find another strictly lesser (as a simple example, note that appending a zero digit always yields a strictly lesser value, e.g., $10 \prec 1$, since $101 < 110$). Also unlike the natural ordering on the positive integers, this ordering is *dense*; given any pair $x \prec y$, we can always find a third value in between, i.e., $x \prec x + y \prec y$, where $x + y$ is the concatenation.

Finally, and perhaps most disturbingly, observe that a swap-based sorting algorithm will not necessarily make “monotonic” progress toward the solution: swapping elements that are “out of order” in terms of the comparison function may not always improve the overall situation. For example, consider the partially-sorted list (12, 345, 1), whose concatenation yields 123451. The comparison function indicates that 12 and 1 are “out of order” (121>112), but swapping them makes things worse: the concatenation of (1, 345, 12) yields the larger value 134512.
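This non-monotonicity is easy to verify directly; a quick sketch (the helper name is mine):

```python
def cmp(x, y):
    # compare by which concatenation order yields the smaller integer
    return int(x + y) - int(y + x)

def cat_value(arr):
    # numeric value of the concatenated arrangement
    return int(''.join(arr))

# 12 and 1 are "out of order" according to the comparison function...
assert cmp('12', '1') > 0   # 121 > 112
# ...but swapping these non-adjacent elements makes the overall value larger:
assert cat_value(['12', '345', '1']) == 123451
assert cat_value(['1', '345', '12']) == 134512
```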

**Proof of correctness**

Given all of this perhaps non-intuitive weirdness, it seems worth being more rigorous in proving that the above implementation actually does work. We do this in two steps:

**Theorem 1**: The relation $\prec$ defined by the comparison function `cmp` is a strict weak ordering.

*Proof*: Irreflexivity follows from the definition. To show transitivity, let $x, y, z$ be positive integers with $m, n, p$ digits, respectively, with $x \prec y$ and $y \prec z$. Then

$x \cdot 10^n + y < y \cdot 10^m + x \iff x(10^n - 1) < y(10^m - 1) \iff \dfrac{x}{10^m - 1} < \dfrac{y}{10^n - 1}$

and

$y \cdot 10^p + z < z \cdot 10^n + y \iff \dfrac{y}{10^n - 1} < \dfrac{z}{10^p - 1}$

Thus,

$\dfrac{x}{10^m - 1} < \dfrac{z}{10^p - 1} \iff x \cdot 10^p + z < z \cdot 10^m + x$

i.e., $x \prec z$. Incomparability of $x$ and $y$ corresponds to $x(10^n - 1) = y(10^m - 1)$; this is an equivalence relation, with reflexivity and symmetry following from the definition, and transitivity shown exactly as above (with equality in place of inequality).

**Theorem 2**: Concatenating positive integers sorted by $\prec$ yields the minimum value among all possible arrangements.

*Proof*: Let $x_1 x_2 \cdots x_n$ be the concatenation of an arrangement of positive integers with minimum value, and suppose that it is not ordered by $\prec$, i.e., $x_{k+1} \prec x_k$ for some $k$. Then the concatenation $x_1 \cdots x_{k+1} x_k \cdots x_n$, obtained by swapping the adjacent pair, is strictly smaller, a contradiction.

(Note that this argument is only “simple” because $x_k$ and $x_{k+1}$ are *adjacent*. As mentioned above, swapping non-adjacent elements that are out of order may not in general decrease the overall value.)
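Incidentally, comparing $x$ with $y$ via `cmp` is equivalent to comparing the fractions $x / (10^{\ell(x)} - 1)$ and $y / (10^{\ell(y)} - 1)$, where $\ell(\cdot)$ counts digits; that is, comparing the values of the repeating decimals $0.\overline{x}$ and $0.\overline{y}$. A quick sketch checking this equivalence, and the agreement with brute force (the helper names are mine):

```python
import itertools
from fractions import Fraction
from functools import cmp_to_key

def cmp(x, y):
    # compare by which concatenation order yields the smaller integer
    return int(x + y) - int(y + x)

def repeating_key(s):
    # value of the repeating decimal 0.sss..., i.e., s / (10^len(s) - 1)
    return Fraction(int(s), 10 ** len(s) - 1)

nums = ['12', '345', '1', '10', '121']
by_cmp = sorted(nums, key=cmp_to_key(cmp))
by_key = sorted(nums, key=repeating_key)
assert by_cmp == by_key
# ...and both agree with the brute force minimum:
assert ''.join(by_cmp) == min(''.join(p) for p in itertools.permutations(nums))
```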


**Instructions**

Let $\theta$ be the measure of the *spring angle*, i.e., the angle made by the flat back side of the crown molding with the wall (typically 38 or 45 degrees). Let $\phi$ be the measure of the *wall angle* (e.g., 90 degrees for an inside corner, 270 degrees for an outside corner, etc.).

To cut the piece on the *left-hand* wall (facing the corner), set the bevel angle $\beta$ and miter angle $\mu$ to

$\beta = \arcsin\left( \cos\theta \cos\frac{\phi}{2} \right)$

$\mu = \arctan\left( \frac{\sin\theta}{\tan(\phi/2)} \right)$

where positive angles are to the right (i.e., positive miter angle is counter-clockwise). Cut with the *ceiling* contact edge against the fence, and the finished piece on the *left* side of the blade.

To cut the piece on the *right-hand* wall (facing the corner), reverse the miter angle, $\mu \rightarrow -\mu$,

and cut with the *wall* contact edge against the fence, and the finished piece still on the *left* side of the blade.
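As a sanity check, the angle settings can be computed with a short sketch (the function and parameter names are mine):

```python
import math

def crown_angles(spring_deg, wall_deg):
    """Bevel and miter angles in degrees for the left-hand piece.

    spring_deg: spring angle (typically 38 or 45)
    wall_deg: wall angle (90 for an inside corner, 270 for an outside corner)
    """
    theta = math.radians(spring_deg)
    half_phi = math.radians(wall_deg) / 2
    bevel = math.degrees(math.asin(math.cos(theta) * math.cos(half_phi)))
    miter = math.degrees(math.atan(math.sin(theta) / math.tan(half_phi)))
    return bevel, miter

# the common 38-degree spring, 90-degree inside corner setting:
bevel, miter = crown_angles(38, 90)
print(round(bevel, 2), round(miter, 2))  # 33.86 31.62
```

These match the settings printed on most compound miter saw charts for a 38-degree spring angle and a square inside corner.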

**Derivation**

Let’s start by focusing on the crown molding piece on the left-hand wall as we face the corner. Consider a coordinate frame with the ceiling corner at the origin, the positive *x*-axis running along the crown molding to be cut, the negative *z*-axis running down to the floor, and the *y*-axis completing the right-handed frame, as shown in the figure below. In this example of an inside 90-degree corner, the positive *y*-axis runs along the opposite wall.

The desired axis of rotation of the saw blade is normal to the triangular cross section at the corner, which may be computed as the cross product of unit vectors from the origin to the vertices of this cross section (one along the ceiling toward where the two moldings’ ceiling contact lines meet, i.e., along the bisector of the wall angle, and one down the corner edge of the walls):

$n = \left( \cos\frac{\phi}{2}, \sin\frac{\phi}{2}, 0 \right) \times (0, 0, -1) = \left( -\sin\frac{\phi}{2}, \cos\frac{\phi}{2}, 0 \right)$

To cut with the back of the crown molding flat on the saw table (the *xz*-plane), with the ceiling contact edge against the fence (the *xy*-plane), rotate this vector by angle $\theta$ about the *x*-axis:

$v = R_x(\theta) \, n = \left( -\sin\frac{\phi}{2}, \cos\theta \cos\frac{\phi}{2}, \sin\theta \cos\frac{\phi}{2} \right)$

It remains to compute the bevel and miter rotations that transform the axis of rotation of the saw blade from its initial $(1, 0, 0)$ to $v$. With the finished piece on the left side of the blade, the bevel is a rotation by angle $\beta$ about the *z*-axis, followed by the miter rotation by angle $\mu$ about the *y*-axis:

$R_y(\mu) \, R_z(\beta) \, (1, 0, 0) = (\cos\beta \cos\mu, \sin\beta, -\cos\beta \sin\mu) = v$

Solving yields the bevel and miter angles above. For the crown molding piece on the *right-hand* wall, the derivation is the mirror image of the above, assuming that the *wall* contact edge is against the fence (still with the finished piece on the left side of the blade). The result is no change to the bevel angle, and a sign change in the miter angle.


Right. Now consider the following iterative version of the game: as each player’s stick emerges from under the bridge, he retrieves it, then runs back to the upstream side of the bridge and drops the stick again. Both players continue in this way, until one player’s stick “laps” the other, having emerged from under the bridge for the $n$-th time before the other player’s stick has emerged $n - 1$ times.

Let’s make this more precise by modeling the game as a discrete event simulation with positive integer parameters $(a, b, r)$. Both players start at time 0 by simultaneously dropping their sticks, each of which emerges from under the bridge an integer number of seconds later, independently and uniformly distributed between $a$ and $b$ (inclusive). The river is random, but the players are otherwise evenly matched: each player then takes a constant $r$ seconds to recover his stick from the water, run back to the upstream side of the bridge, and drop the stick again.

Suppose, for example, that particular values of $(a, b, r)$ are fixed. If the game ends at the instant the winner’s stick emerges from under the bridge having first lapped the other player’s stick, then what is the expected time to complete a game of Iterative Poohsticks?

I think this is a great problem. As is often the case here, it’s not only an interesting mathematical problem to calculate the *exact* expected number of seconds to complete the game, but in addition, this game can even be tricky to *simulate* correctly, as a means of approximating the solution. The potential trickiness stems from an ambiguity in the description of how the game ends: what happens if the leading player’s stick emerges from under the bridge for the $n$-th time, at exactly the *same* time that the trailing player’s stick emerges for the $(n-1)$-st time?

There are two possibilities. Under version A of the rules, the game continues, so that the leading player’s stick must emerge *strictly before* the trailing player’s stick. Under version B of the rules, the game ends there, so that the leading player’s stick need only emerge *as or before* the trailing player’s stick emerges in order to win.

(It’s interesting to consider which of these versions of the game is easier to simulate and/or analyze. I think version B admits a slightly cleaner *exact* solution, although my *simulation* of the game switches more easily between the two versions. For reference, the expected time to complete the game with the above parameters is about 309.911 seconds for version A, and about 290.014 seconds for version B.)
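For those who want to experiment, the following is a sketch of such a discrete event simulation. The function and variable names are mine, and since the post’s actual parameter values aren’t reproduced here, the values at the bottom are placeholders for illustration only:

```python
import random

def poohsticks_time(a, b, r, version, rng):
    """Simulate one game of Iterative Poohsticks; return elapsed seconds.

    Each drop emerges an integer time uniform on {a, ..., b} later; each
    player takes a constant r seconds between retrieval and the next drop.
    Version 'A' requires strictly-before lapping; version 'B' ends on a tie.
    """
    emerge = [rng.randint(a, b), rng.randint(a, b)]  # next emergence times
    count = [0, 0]                                   # emergences so far
    while True:
        t = min(emerge)
        arrivals = [i for i in (0, 1) if emerge[i] == t]
        for i in arrivals:
            j = 1 - i
            # under version A, a simultaneous emergence by the other stick counts
            other = count[j] + (1 if version == 'A' and emerge[j] == t else 0)
            if count[i] + 1 >= other + 2:  # stick i has lapped stick j
                return t
        for i in arrivals:
            count[i] += 1
            emerge[i] = t + r + rng.randint(a, b)

# placeholder parameters, for illustration only:
games = [poohsticks_time(5, 15, 10, 'A', random.Random(s)) for s in range(1000)]
print(sum(games) / len(games))  # rough Monte Carlo estimate of the expected time
```

Note that on any given realization of the river, version B can only end sooner than version A, never later.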


This post is part pedagogical rant, part discussion of a beautiful technique in combinatorics, both motivated by a recent exchange with a high school student, about an interesting dice game that seems to be a common introductory exercise in probability:

There are 12 horses, numbered 1 through 12, each initially at position 0 on a track. Play consists of a series of turns: on each turn, the teacher rolls two 6-sided dice, where the sum of the dice indicates which horse to advance one position along the track. The first horse to reach position $n$ wins the race.

At first glance, this seems like a nice exercise. Students quickly realize, for example, that horse #1 is a definite loser– the sum of two dice will never equal 1– and that horse #7 is the best bet to win the race, with the largest probability (1/6) of advancing on any given turn.

But what if a student asks, as this particular student did, “Okay, I can see how to calculate the distribution of probabilities of each horse advancing in a *single turn*, but what about the probabilities of each horse *winning the race*, as a function of the race length $n$?” This makes me question whether this is indeed such a great exercise, at least as part of an *introduction* to probability. What started as a fun game and engaging discussion has very naturally led to a significantly more challenging problem, whose solution is arguably beyond most students– and possibly many teachers as well– at the high school level.

I like this game anyway, and I imagine that I would likely use it if I were in a similar position. Although the methods involved in an *exact* solution might be inappropriate at this level, the game still lends itself nicely to investigation via Monte Carlo simulation, especially for students with a programming bent.
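For example, a minimal Monte Carlo sketch of the race (the names are mine):

```python
import random
from collections import Counter

def race_winner(n, rng):
    """Simulate one race to position n; return the winning horse number."""
    position = Counter()
    while True:
        horse = rng.randint(1, 6) + rng.randint(1, 6)  # sum of two dice
        position[horse] += 1
        if position[horse] == n:
            return horse

rng = random.Random(2024)
wins = Counter(race_winner(5, rng) for _ in range(10000))
print(wins.most_common(3))  # horse #7 should top the list
```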

**Poissonization**

There *is* an exact solution, however, via several different approaches. This problem is essentially a variant of the coupon collector’s problem in disguise: if each box of cereal contains one of 12 different types of coupons, then if I buy boxes of cereal until I have $n$ coupons of one type, what is the probability of stopping with each type of coupon? Here the horses are the coupon types, and the dice rolls are the boxes of cereal.

As in the coupon collector’s problem, it is helpful to modify the model of the horse race in a way that, at first glance, seems like unnecessary *additional* complexity: suppose that the dice rolls occur at times distributed according to a Poisson process with rate 1. Then the advances of each individual horse (that is, the subsets of dice rolls with each corresponding total) are also Poisson processes, each with rate equal to the probability of the corresponding dice roll.

Most importantly, these individual processes are *independent*, meaning that we can easily compute the probability of desired states of the horses’ positions on the track at a particular time, as the *product* of the individual probabilities for each horse. Integrating over all time yields the desired probability that horse $k$ wins the race:

$P(\text{horse } k \text{ wins}) = \int_0^\infty p_k e^{-p_k t} \frac{(p_k t)^{n-1}}{(n-1)!} \prod_{j \neq k} \left( \sum_{m=0}^{n-1} e^{-p_j t} \frac{(p_j t)^m}{m!} \right) \, dt$

where $p_j$ is the single-turn probability of advancing horse $j$.

Intuitively, horse $k$ advances on the final dice roll, after exactly $n - 1$ previous advances, while each of the other horses has advanced at most $n - 1$ times.
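The integral can be evaluated numerically; the following is a sketch using Simpson’s rule, where the function names and the truncation of the improper integral are mine:

```python
import math

# single-turn advance probabilities for totals of two dice
P = {j: (6 - abs(j - 7)) / 36 for j in range(2, 13)}

def poisson_cdf(lam, m):
    """P(Poisson(lam) <= m)."""
    term = total = math.exp(-lam)
    for i in range(1, m + 1):
        term *= lam / i
        total += term
    return total

def win_probability(k, n, upper=200.0, steps=20000):
    """P(horse k wins a race to position n), via Simpson's rule on [0, upper]."""
    def integrand(t):
        pk = P[k]
        # density of the n-th advance of horse k at time t (Erlang)...
        density = pk * math.exp(-pk * t) * (pk * t) ** (n - 1) / math.factorial(n - 1)
        # ...times the probability every other horse has at most n-1 advances
        others = 1.0
        for j in P:
            if j != k:
                others *= poisson_cdf(P[j] * t, n - 1)
        return density * others
    h = upper / steps
    s = integrand(0.0) + integrand(upper)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * integrand(i * h)
    return s * h / 3
```

As a check, for $n = 1$ this should recover the single-turn probability, e.g., `win_probability(7, 1)` should be very nearly 1/6.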

**Generating functions**

This “Poissonization” trick is not the only way to solve the problem, and in fact may be less suitable for implementation without a sufficiently powerful computer algebra system. Generating functions may also be used to “encode” the possible outcomes of dice rolls leading to victory for a particular horse, as follows:

$f_k(x) = \frac{(p_k x)^{n-1}}{(n-1)!} \prod_{j \neq k} \sum_{i=0}^{n-1} \frac{(p_j x)^i}{i!}$

where the probability that horse $k$ wins on the $(m+1)$-st dice roll is $p_k \, m!$ times the coefficient of $x^m$ in $f_k(x)$. Adding up these probabilities for all possible $m$ yields the overall probability of winning. This boils down to simple polynomial multiplication and addition, allowing relatively straightforward implementation in Python, for example.
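Here is a sketch of such an implementation, using exact rational arithmetic via `fractions` (the function and variable names are mine):

```python
from fractions import Fraction

def win_probabilities(n):
    """Exact probability that each horse (2..12) wins a race to position n."""
    # single-turn advance probabilities; horse 1 never advances
    p = {k: Fraction(6 - abs(k - 7), 36) for k in range(2, 13)}
    max_m = 11 * (n - 1)  # maximum rolls possible before the winning roll
    fact = [1]
    for i in range(1, max_m + 1):
        fact.append(fact[-1] * i)
    probs = {}
    for k in p:
        # f_k(x) = (p_k x)^(n-1)/(n-1)! * prod_{j != k} sum_{i<n} (p_j x)^i/i!
        poly = [Fraction(0)] * (n - 1) + [p[k] ** (n - 1) / fact[n - 1]]
        for j in p:
            if j == k:
                continue
            factor = [p[j] ** i / fact[i] for i in range(n)]
            product = [Fraction(0)] * (len(poly) + len(factor) - 1)
            for d1, c1 in enumerate(poly):
                for d2, c2 in enumerate(factor):
                    product[d1 + d2] += c1 * c2
            poly = product
        # P(win on roll m+1) = p_k * m! * [x^m] f_k(x); sum over all m
        probs[k] = sum(p[k] * fact[m] * c for m, c in enumerate(poly))
    return probs
```

For $n = 1$ the result reduces to the single-turn distribution, and for any $n$ the probabilities sum to exactly 1, which makes a convenient unit test.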

The results are shown in the following figure. Each curve corresponds to a race length, from $n = 1$ in black– where the outcome is determined by a single dice roll– to the longest race length shown in purple.

As intuition might suggest, the longer the race, the more likely the favored horse #7 is to win. This is true for *any* non-uniformity in the single-turn probability distribution. For a contrasting example, consider a race with just 6 horses, with each turn decided by a *single* die roll. This race is fair no matter how long it is; every horse always has the same probability of winning. But if the die is loaded, no matter how slightly, then the longer the race, the more advantage to the favorite.
