Analysis of Bingo

Introduction

Suppose that Alice and Bob play a game for a dollar: they roll a single six-sided die repeatedly, until either:

  1. each of the six possible faces has been observed at least once, in which case Alice wins; or
  2. some single face has been observed six times, in which case Bob wins.

Would you rather be Alice or Bob in this scenario? Or does it matter? You can play a similar game with a deck of playing cards: shuffle the deck, and deal one card at a time from the top of the deck, with Alice winning when all four suits are dealt, and Bob winning when any particular suit is dealt four times. (This version has the advantage of providing a built-in record of the history of deals, in case of argument over who actually wins.) Again, would you rather be Alice or Bob?

It turns out that Alice has a distinct advantage in both games, winning nearly three times more often than Bob in the dice version, and nearly twice as often in the card version. The objective of this post is to describe some interesting mathematics involved in these games, and relate them to the game of Bingo, where a similar phenomenon is described in a recent Math Horizons article (see reference below): the winning Bingo card among multiple players is much more likely to have a horizontal bingo (all numbers in some row) than vertical (all numbers in some column).
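Before diving into the mathematics, a quick Monte Carlo sanity check of the dice game is easy; the following sketch (the function alice_wins_dice is my own naming, not part of the analysis below) should land near the roughly 0.75 winning probability for Alice derived at the end of this post.

import random
from collections import Counter

def alice_wins_dice():
    """One play of the dice game: Alice wins if all six faces appear
    before any single face has appeared six times."""
    counts = Counter()
    while True:
        counts[random.randint(1, 6)] += 1
        if len(counts) == 6:
            return True       # all six faces observed: Alice wins
        if max(counts.values()) == 6:
            return False      # some face observed six times: Bob wins

trials = 100_000
print(sum(alice_wins_dice() for _ in range(trials)) / trials)  # roughly 0.75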

Bingo with a single card

First, let’s describe how Bingo works with just one player. A Bingo card is a 5-by-5 grid of numbers, with each column containing 5 numbers randomly selected without replacement from 15 possibilities: the first “B” column is randomly selected from the numbers 1-15, the second “I” column is selected from 16-30, the third column from 31-45, the fourth column from 46-60, and the fifth “O” column from 61-75. An example is shown below.

Example of an American Bingo card.

A “caller” randomly draws, without replacement, from a pool of balls numbered 1 through 75, calling each number in turn as it is drawn, with the player marking the called number if it is present on his or her card. The player wins by marking all 5 squares in any row, column, or diagonal. (One minor wrinkle in this setup is that, in typical American-style Bingo, the center square is “free,” considered to be already marked before any numbers are called.)

It will be useful to generalize this setup with parameters (n,m), where each card is n \times n with each column selected from m possible values, so that standard Bingo corresponds to (n,m)=(5,15).

We can compute the probability distribution of the number of draws required for a single card to win. Bill Butler describes one approach, enumerating the 2^{n^2-1}=2^{24} possible partially-marked cards and computing the probability of at least one bingo for each such marking.

Alternatively, we can use inclusion-exclusion to compute the cumulative distribution directly, by enumerating just the 2^{2n+2}=2^{12} possible combinations of horizontal, vertical, and diagonal bingos (of which there are 5, 5, and 2, respectively) on a card with k marks. In Mathematica:

bingoSets[n_, free_: True, diag_: True] :=
 Module[{card = Partition[Range[n^2], n]},
  If[free, card[[Ceiling[n/2], Ceiling[n/2]]] = 0];
  DeleteCases[Join[
    card, Transpose[card],
    If[diag, {Diagonal[card], Diagonal[Reverse[card]]}, {}]],
   0, Infinity]]

bingoCDF[k_, nm_, bingos_] :=
 Module[{j},
  1 - Total@Map[
      (j = Length[Union @@ #];
        (-1)^Length[#] Binomial[nm - j, k - j]) &,
      Subsets[bingos]]/Binomial[nm, k]]

bingos = bingoSets[5, False, True];
cdf = Table[bingoCDF[k, 75, bingos], {k, 1, 75}];

(Note the optional arguments specifying whether the center square is free, and whether diagonal bingo is allowed. It will be convenient shortly to consider a simplified version of the game, where these two “special” rules are discarded.)
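For readers without Mathematica, here is a rough Python translation of the same inclusion-exclusion computation (the names bingo_sets and bingo_cdf are mine); it should reproduce the distribution and expected value shown in the following figure.

from itertools import combinations
from math import comb

def bingo_sets(n, free=True, diag=True):
    """Bingo lines (rows, columns, optional diagonals) of an n x n card,
    as sets of square indices, dropping the center square if it is free."""
    card = [[r * n + c + 1 for c in range(n)] for r in range(n)]
    if free:
        card[n // 2][n // 2] = 0
    lines = [list(row) for row in card] + [list(col) for col in zip(*card)]
    if diag:
        lines += [[card[i][i] for i in range(n)],
                  [card[i][n - 1 - i] for i in range(n)]]
    return [{s for s in line if s != 0} for line in lines]

def bingo_cdf(k, nm, lines):
    """P(at least one bingo on a single card within k of nm draws), by
    inclusion-exclusion over subsets of the possible bingo lines."""
    total = 0
    for r in range(len(lines) + 1):
        for subset in combinations(lines, r):
            j = len(set().union(*subset)) if subset else 0
            total += (-1) ** r * (comb(nm - j, k - j) if k >= j else 0)
    return 1 - total / comb(nm, k)

lines = bingo_sets(5, free=False, diag=True)
cdf = [bingo_cdf(k, 75, lines) for k in range(1, 76)]
print(1 + sum(1 - p for p in cdf))  # expected number of draws, about 43.5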

The following figure shows the resulting distribution, with the expected number of 43.546 draws shown in red.

Cumulative probability distribution of a single-card bingo in at most the given number of draws. The average of 43.546 draws is shown in red.

Independence with 2 or more cards

Before getting to the “horizontal is more likely than vertical” phenomenon, it’s worth pointing out another non-intuitive aspect of Bingo. If instead of just a single card, we have a game with multiple players, possibly with thousands of different cards, what is the distribution of number of draws until someone wins?

If P_1(X \leq k) is the cumulative distribution for a single card as computed above, then since each of multiple cards is randomly– and independently– “generated,” intuitively it seems like the probability P_j(X \leq k) of at least one winning bingo among j cards in at most k draws should be given by

P_j(X \leq k) \stackrel{?}{=} 1-(1-P_1(X \leq k))^j

Butler uses exactly this approach. However, this is incorrect; although the values in the squares of multiple cards are independent, the presence or absence of winning bingos on those cards is not. Perhaps the best way to see this is to consider a “smaller” simplified version of the game, with (n,m)=(2,2), so that there are only four equally likely possible distinct cards:

The four possible cards in (2,2) Bingo.

Let’s further simplify the game so that only horizontal and vertical bingos are allowed, with no diagonals. Then the game must end after either two or three draws: with a single card, it ends in two draws with probability 2/3. However, with two cards, the probability of a bingo in two draws is 5/6, not 1-(1-2/3)^2=8/9.
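These numbers are small enough to check by brute force; the following sketch (not from the original analysis) enumerates the four possible cards and all two-ball draws:

from fractions import Fraction
from itertools import permutations, product

def card_lines(col1, col2):
    """Lines (two rows and two columns, no diagonals) of the 2x2 card whose
    columns hold the ordered pairs col1 and col2."""
    rows = [{col1[0], col2[0]}, {col1[1], col2[1]}]
    return rows + [set(col1), set(col2)]

# the four equally likely cards: columns {1,2} and {3,4}, each in either order
cards = [card_lines(c1, c2)
         for c1 in permutations((1, 2)) for c2 in permutations((3, 4))]

def p_win_in_two(hands):
    """Probability that at least one of the given cards has a bingo
    after the first two draws from the numbers 1-4."""
    wins = [any(set(draw) in lines for lines in hands)
            for draw in permutations((1, 2, 3, 4), 2)]
    return Fraction(sum(wins), len(wins))

print(sum(p_win_in_two([c]) for c in cards) / 4)                         # 2/3
print(sum(p_win_in_two([c, d]) for c, d in product(cards, cards)) / 16)  # 5/6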

Horizontal bingos with many cards

Finally, let’s come back to the initial problem: suppose that there are a large number of players, with so many cards in play that we are effectively guaranteed a winner as soon as either:

  1. At least one number from each of the n=5 column groups is drawn, resulting in a horizontal bingo on some card; or
  2. At least n=5 of m=15 possible numbers is drawn from any one particular column group, resulting in a vertical bingo on some card.

(Let’s ignore the free square and diagonal bingos for now; the former is easily handled but unnecessarily complicates the analysis, while the latter would mean that (1) and (2) are not mutually exclusive.)

Then the interesting observation is that a horizontal bingo (1) is over three times more likely to occur than a vertical bingo (2). Furthermore, this setup– Bingo with a large number of cards– is effectively the same as the card and dice games described in the introduction: Bingo is (n,m)=(5,15), the card game is (4,13), and the dice version is effectively (6,\infty).

The Math Horizons article referenced below describes an approach to calculating these probabilities, which involves enumerating integer partitions. However, this problem is ready-made for generating functions, which takes care of the partition house-keeping for us: let’s define

g_a(x) = \left(\sum_{j=a}^{n-1} {m \choose j}x^j\right)^{n-1}

so that, for example, for Bingo with no free square,

g_1(x) = \left({15 \choose 1}x^1 + {15 \choose 2}x^2 + {15 \choose 3}x^3 + {15 \choose 4}x^4\right)^4

Intuitively, each factor corresponds to a column, where each coefficient of x^j indicates the number of ways to draw exactly j numbers from that column (with some minimum number from each column specified by a). The overall coefficient of x^k indicates the number of ways to draw k numbers in total, with neither a horizontal nor vertical bingo.

Then using the notation from the article, the probability $P(H_k)$ of a horizontal bingo on exactly the k-th draw is

P(H_k) = \frac{mn(k-1)!(mn-k)!}{(mn)!}[x^{k-1}]g_1(x)

and the probability $P(V_k)$ of a vertical bingo on exactly the k-th draw is

P(V_k) = \frac{mn(k-1)!(mn-k)!}{(mn)!} {m-1 \choose n-1} [x^{k-n}](g_0(x)-g_1(x))

The summation

\sum_{k=n}^{(n-1)^2+1} P(H_k)

over all possible numbers of draws yields the overall probability of about 0.752 that a horizontal bingo is observed before a vertical one. Similarly, for the card game with (n,m)=(4,13), the probability that Alice wins is 22543417/34165005, or about 0.66. For the dice game– which requires a slight modification to the above formulation, left as an exercise for the reader– Alice wins with probability about 0.747.
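As a sketch of how this computation might look in code (the helper names are mine, and the quoted values come from the text above rather than an independent derivation), the generating-function bookkeeping reduces to polynomial multiplication with exact rational arithmetic:

from fractions import Fraction
from math import comb, factorial

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def p_horizontal_first(n, m):
    """Probability that, with many cards in play, a horizontal bingo occurs
    before a vertical one (no free square, no diagonals)."""
    def g(a):
        # coefficients of (sum_{j=a}^{n-1} C(m,j) x^j)^(n-1)
        factor = [comb(m, j) if j >= a else 0 for j in range(n)]
        result = [1]
        for _ in range(n - 1):
            result = poly_mul(result, factor)
        return result

    g1 = g(1)
    total = Fraction(0)
    for k in range(n, (n - 1) ** 2 + 2):
        total += Fraction(m * n * factorial(k - 1) * factorial(m * n - k)
                          * g1[k - 1], factorial(m * n))
    return total

print(float(p_horizontal_first(5, 15)))  # about 0.752 (standard Bingo)
print(p_horizontal_first(4, 13))         # 22543417/34165005 for the card game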

Reference:

  1. Benjamin, A., Kisenwether, J., and Weiss, B., The Bingo Paradox, Math Horizons, 25(1), September 2017, p. 18-21 [PDF]

Digits of pi and Python generators

Introduction

This post was initially motivated by an interesting recent article by Chris Wellons discussing the Blowfish cipher. The Blowfish cipher’s subkeys are initialized with values containing the first 8336 hexadecimal digits of \pi, the idea being that implementers may compute these digits for themselves, rather than trusting the integrity of explicitly provided “random” values.

So, how do we compute hexadecimal– or decimal, for that matter– digits of \pi? This post describes several methods for computing digits of \pi and other well-known constants, as well as some implementation issues and open questions that I encountered along the way.

Pi is easy with POSIX

First, Chris’s implementation of the Blowfish cipher includes a script to automatically generate the code defining the subkeys. The following two lines do most of the work:

cmd='obase=16; scale=10040; a(1) * 4'
pi="$(echo "$cmd" | bc -l | tr -d '\n\\' | tail -c+3)"

This computes base 16 digits of \pi as a(1) * 4, or 4 times the arctangent of 1 (i.e., \tan(\pi/4) = 1), using the POSIX arbitrary-precision calculator bc. Simple, neat, end of story.

How might we do the same thing on Windows? There are plenty of approximation formulas and algorithms for computing digits of \pi, but to more precisely specify the requirements I was interested in, is there an algorithm that generates digits of \pi:

  • in any given base,
  • one digit at a time “indefinitely,” i.e., without committing to a fixed precision ahead of time,
  • with a relatively simple implementation,
  • using only arbitrary-precision integer arithmetic (such as is built into Python, or maybe C++ with a library)?

Bailey-Borwein-Plouffe

The Bailey-Borwein-Plouffe (BBP) formula seems ready-made for our purpose:

\pi = \sum_{k=0}^{\infty} \frac{1}{16^k} (\frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6})

This formula has the nice property that it may be used to efficiently compute the n-th hexadecimal digit of \pi, without having to compute all of the previous digits along the way. Roughly, the approach is to multiply everything by 16^n, then use modular exponentiation to collect and discard the integer part of the sum, leaving the fractional part with enough precision to accurately extract the n-th digit.
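For illustration, here is a minimal sketch of that digit-extraction idea (the function pi_hex_digit is hypothetical; the fractional sums are carried in ordinary double precision, so round-off limits how large n can reliably be):

def pi_hex_digit(n):
    """n-th hexadecimal digit of pi after the point, via BBP digit extraction."""
    def series(j):
        # fractional part of sum_{k>=0} 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):             # terms with non-negative exponent
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        for k in range(n, n + 3):      # a few rapidly vanishing tail terms
            s += 16.0 ** (n - 1 - k) / (8 * k + j)
        return s % 1.0

    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(16 * x)

print('{:x}'.format(pi_hex_digit(1)))  # 2, since pi = 3.243f6a88... in hex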

However, getting the implementation details right can be tricky. For example, this site provides source code and example data containing one million hexadecimal digits of \pi generated using the BBP formula… but roughly one out of every 55 digits or so is incorrect.

But suppose that we don’t want to “look ahead,” but instead want to generate all hexadecimal digits of \pi, one after the other from the beginning. Can we still make use of this formula in a simpler way? For example, consider the following Python implementation:

def pi_bbp():
    """Conjectured BBP generator of hex digits of pi."""
    a, b = 0, 1
    k = 0
    while True:
        ak, bk = (120 * k**2 + 151 * k + 47,
                  512 * k**4 + 1024 * k**3 + 712 * k**2 + 194 * k + 15)
        a, b = (16 * a * bk + ak * b, b * bk)
        digit, a = divmod(a, b)
        yield digit
        k = k + 1

for digit in pi_bbp():
    print('{:x}'.format(digit), end='')

The idea is similar to converting a fraction to a string in a given base: multiply by the base (16 in this case), extract the next digit as the integer part, then repeat with the remaining fractional part. Here a/b is the running fractional part, and a_k / b_k is the current term in the BBP summation. (Using the fractions module doesn’t significantly improve readability, and is much slower than managing the numerators and denominators directly.)

Now for the interesting part: although this implementation appears to behave correctly– at least for the first 500,000 digits where I stopped testing– it isn’t clear to me that it is always correct. That is, I don’t see how to prove that this algorithm will continue to generate correct hexadecimal digits of \pi indefinitely. Perhaps a reader can enlighten me.

(Initial thoughts: Since it’s relatively easy to show that each term a_k / b_k in the summation is positive, I think it would suffice to prove that the algorithm never generates an “invalid” hexadecimal digit that is greater than 15. But I don’t see how to do this, either.)

Interestingly, Bailey et al. conjecture (see Reference 1 below) the correctness of a similar algorithm, which they have verified out to 10 million hexadecimal digits. The algorithm involves a strangely similar-but-slightly-different approach and formula:

def pi_bbmw():
    """Conjectured BBMW generator of hex digits of pi."""
    a, b = 0, 1
    k = 0
    yield 3
    while True:
        k = k + 1
        ak, bk = (120 * k**2 - 89 * k + 16,
                  512 * k**4 - 1024 * k**3 + 712 * k**2 - 206 * k + 21)
        a, b = (16 * a * bk + ak * b, b * bk)
        a = a % b
        yield 16 * a // b

Unfortunately, this algorithm is slower, requiring one more expensive arbitrary-precision division operation per digit than the BBP version.

Proven algorithms

Although the above two algorithms are certainly short and sweet, (1) they only work for generating hexadecimal digits (vs. decimal, for example), and (2) we don’t actually know if they are correct. Fortunately, there are other options.

Gibbons (Reference 2) describes an algorithm that is not only proven correct, but works for generating digits of \pi in any base:

def pi_gibbons(base=10):
    """Gibbons spigot generator of digits of pi in given base."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = (base * q, base * (r - n * t), t, k,
                                (base * (3 * q + r)) // t - base * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

The bad news is that this is by far the slowest algorithm that I investigated, nearly an order of magnitude slower than BBP on my laptop.

The good news is that there is at least one other algorithm, that is not only competitive with BBP in terms of throughput, but is also general enough to easily compute– in any base– not just the digits of \pi, but also e (the base of the natural logarithm), \phi (the golden ratio), \sqrt{2}, and others.

The idea is to express the desired value as a generalized continued fraction:

a_0 + \frac{b_1}{a_1 + \frac{b_2}{a_2 + \frac{b_3}{a_3 + \cdots}}}

where in particular \pi may be represented as

\pi = 0 + \frac{4}{1 + \frac{1^2}{3 + \frac{2^2}{5 + \cdots}}}

Then digits may be extracted similarly to the BBP algorithm above: iteratively refine the convergent (i.e., approximation) of the continued fraction until the integer part doesn’t change; extract this integer part as the next digit, then multiply the remaining fractional part by the base and continue. In Python:

def continued_fraction(a, b, base=10):
    """Generate digits of continued fraction a(0)+b(1)/(a(1)+b(2)/(...)."""
    (p0, q0), (p1, q1) = (a(0), 1), (a(1) * a(0) + b(1), a(1))
    k = 1
    while True:
        (d0, r0), (d1, r1) = divmod(p0, q0), divmod(p1, q1)
        if d0 == d1:
            yield d1
            p0, p1 = base * r0, base * r1
        else:
            k = k + 1
            x, y = a(k), b(k)
            (p0, q0), (p1, q1) = (p1, q1), (x * p1 + y * p0, x * q1 + y * q0)

This approach is handy because not only \pi, but other common constants as well, have generalized continued fraction representations in which the sequences (a_k), (b_k) are “nice.” To generate decimal digits of \pi:

for digit in continued_fraction(lambda k: 0 if k == 0 else 2 * k - 1,
                                lambda k: 4 if k == 1 else (k - 1)**2, 10):
    print(digit, end='')

Or to generate digits of the golden ratio \phi:

for digit in continued_fraction(lambda k: 1,
                                lambda k: 1, 10):
    print(digit, end='')
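Similarly, using the standard generalized continued fraction e = 2 + \frac{1}{1 + \frac{1}{2 + \frac{2}{3 + \frac{3}{4 + \cdots}}}}, and assuming the generator above behaves for this representation as it does for \pi and \phi, we can generate decimal digits of e:

for digit in continued_fraction(lambda k: 2 if k == 0 else k,
                                lambda k: 1 if k == 1 else k - 1, 10):
    print(digit, end='')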

Consuming blocks of a generator

Finally, once I got around to actually using the above algorithm to try to reproduce Chris’s original code generation script, I accidentally injected a bug that took some thought to figure out. Recall that the Blowfish cipher has a couple of (sets of) subkeys, each populated with a segment of the sequence of hexadecimal digits of \pi. So we would like to extract a block of digits, do something with it, then extract a subsequent block of digits, do something else, etc.

A simple way to do this in Python is with the built-in zip function, that takes multiple iterables as arguments, and returns a single generator that outputs tuples of elements from each of the inputs… and “truncates” to the length of the shortest input. In this case, to extract a fixed number of digits of \pi, we just zip the “infinite” digit generator together with a range of the desired length.

To more clearly see what happens, let’s simplify the context a bit and just try to print the first 10 decimal digits of \pi in two groups of 5:

digits = continued_fraction(lambda k: 0 if k == 0 else 2 * k - 1,
                            lambda k: 4 if k == 1 else (k - 1)**2, 10)
for digit, k in zip(digits, range(5)):
    print(digit, end='')
print()
for digit, k in zip(digits, range(5)):
    print(digit, end='')

This doesn’t work: the resulting output blocks are (31415) and (26535)… but \pi = 3.1415926535\ldots. We “lost” the 9 in the middle.

The problem is that zip evaluates each input iterator in turn, stopping only when one of them is exhausted. In this case, during the 6th iteration of the first loop, we “eat” the 9 from the digit generator before we realize that the range iterator is exhausted. When we continue to the second block of 5 digits, we can’t “put back” the 9.

This is easy to fix: just reverse the order of the zip arguments, so the range is exhausted first, before eating the extra element of the “real” sequence we’re extracting from.

for k, digit in zip(range(5), digits):
    print(digit, end='')
print()
for k, digit in zip(range(5), digits):
    print(digit, end='')

This works as desired, with output blocks (31415) and (92653).

References:

  1. Bailey, D., Borwein, J., Mattingly, A., Wightwick, G., The Computation of Previously Inaccessible Digits of \pi^2 and Catalan’s Constant, Notices of the American Mathematical Society, 60(7) 2013, p. 844-854 [PDF]
  2. Gibbons, J., Unbounded Spigot Algorithms for the Digits of Pi, American Mathematical Monthly, 113(4) April 2006, p. 318-328 [PDF]

Floating-point agreement between MATLAB and C++

Introduction

A common development approach in MATLAB is to:

  1. Write MATLAB code until it’s unacceptably slow.
  2. Replace the slowest code with a C++ implementation called via MATLAB’s MEX interface.
  3. Goto step 1.

Regression testing the faster MEX implementation against the slower original MATLAB can be difficult. Even a seemingly line-for-line, “direct” translation from MATLAB to C++, when provided with exactly the same numeric inputs, executed on the same machine, with the same default floating-point rounding mode, can still yield drastically different output. How does this happen?

There are three primary causes of such differences, none of them easy to fix. The purpose of this post is just to describe these causes, focusing on one in particular that I learned occurs more frequently than I realized.

1. The butterfly effect

This is where the drastically different results typically come from. Even if the inputs to the MATLAB and MEX implementations are identical, suppose that just one intermediate calculation yields even the smallest possible difference in its result… and is followed by a long sequence of further calculations using that result. That small initial difference can be greatly magnified in the final output, due to cumulative effects of rounding error. For example:

x = 0.1;
for k = 1:100
    x = 4 * x * (1 - x);
end
% x == 0.37244749676375793

double x = 0.10000000000000002;
for (int k = 1; k <= 100; ++k) {
    x = 4 * x * (1 - x);
}
// x == 0.5453779481420313

This example is only contrived in its simplicity, exaggerating the “speed” of divergence with just a few hundred floating-point operations. Consider a more realistic, more complex Monte Carlo simulation of, say, an aircraft autopilot maneuvering in response to simulated sensor inputs. In a particular Monte Carlo iteration, the original MATLAB implementation might successfully dampen a severe Dutch roll, while the MEX version might result in a crash (of the aircraft, I mean).

Of course, for this divergence of behavior to occur at all, there must be that first difference in the result of an intermediate calculation. So this “butterfly effect” really is just an effect— it’s not a cause at all, just a magnified symptom of the two real causes, described below.

2. Compiler non-determinism

As far as I know, the MATLAB interpreter is pretty well-behaved and predictable, in the sense that MATLAB source code explicitly specifies the order of arithmetic operations, and the precision with which they are executed. A C++ compiler, on the other hand, has a lot of freedom in how it translates source code into the corresponding sequence of arithmetic operations… and possibly even the precision with which they are executed.

Even if we assume that all operations are carried out in double precision, order of operations can matter; the problem is that this order is not explicitly controlled by the C++ source code being compiled (edit: at least for some speed-optimizing compiler settings). For example, when a compiler encounters the statement double x = a+b+c;, it could emit code to effectively calculate (a+b)+c, or a+(b+c), which do not necessarily produce the same result. That is, double-precision addition is not associative:

(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3) % this is false

Worse, adding explicit parentheses in the source code may help, but it is not guaranteed to.

Another possible problem is intermediate precision. For example, in the process of computing (a+b)+c, the intermediate result t=(a+b) might be computed in, say, 80-bit precision, before clamping the final sum to 64-bit double precision. This has bitten me in other ways discussed here before; Bruce Dawson has several interesting articles with much more detail on this and other issues with floating-point arithmetic.

3. Transcendental functions

So suppose that we are comparing the results of MATLAB and C++ implementations of the same algorithm. We have verified that the numeric inputs are identical, and we have also somehow verified that the arithmetic operations specified by the algorithm are executed in the same order, all in double precision, in both implementations. Yet the output still differs between the two.

Another possible– in fact likely– cause of such differences is in the implementation of transcendental functions such as sin, cos, atan2, exp, etc., which are not required by IEEE-754-2008 to be correctly rounded due to the table maker’s dilemma. For example, following is the first actual instance of this problem that I encountered years ago, reproduced here in MATLAB R2017a (on my Windows laptop):

x = 193513.887169782;
y = 44414.97148164646;
atan2(y, x) == 0.2256108075334872

while the corresponding C++ implementation (still called from MATLAB, built as a MEX function using Microsoft Visual Studio 2015) yields

#include <cmath>
...
std::atan2(y, x) == 0.22561080753348722;

The two results differ by an ulp, with (in this case) the MATLAB version being correctly rounded.

(Rant: Note that both of the above values, despite actually being different, display as the same 0.225610807533487 in the MATLAB command window, which for some reason neglects to provide “round-trip” representations of its native floating-point data type. See here for a function that I find handy when troubleshooting issues like this.)
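One way to confirm the one-ulp gap is to compare the underlying bit patterns of the two doubles; a small sketch (the helper ulps_apart is mine, not part of the original test setup):

import struct

def ulps_apart(a, b):
    """How many representable doubles apart a and b are (both assumed positive)."""
    as_int = lambda x: struct.unpack('<q', struct.pack('<d', x))[0]
    return abs(as_int(a) - as_int(b))

print(ulps_apart(0.2256108075334872, 0.22561080753348722))  # expect 1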

What I found surprising, after recently exploring this issue in more detail, is that the above example is not an edge case: the MATLAB and C++ cmath implementations of the trigonometric and exponential functions disagree quite frequently– and furthermore, the above example notwithstanding, the C++ implementation tends to be more accurate more of the time, significantly so in the case of atan2 and the exponential functions, as the following figure shows.

Probability of MATLAB/C++ differences in function evaluation for input randomly selected from the unit interval.

The setup: on my Windows laptop, with MATLAB R2017a and Microsoft Visual Studio 2015 (with code compiled from MATLAB as a MEX function with the provided default compiler configuration file), I compared each function’s output for 1 million randomly generated inputs– or pairs of inputs in the case of atan2— in the unit interval.

Of those 1 million inputs, I calculated how many yielded different output from MATLAB and C++, where the differences fell into 3 categories:

  • Red indicates that MATLAB produced the correctly rounded result, with the exact value between the MATLAB and C++ outputs (i.e., both implementations had an error less than an ulp).
  • Gray indicates that C++ produced the correctly rounded result, with both implementations having an error of less than an ulp.
  • Blue indicates that C++ produced the correctly rounded result, between the exact value and the MATLAB output (i.e., MATLAB had an error greater than an ulp).

(A few additional notes on test setup: first, I used random inputs in the unit interval, instead of evenly-spaced values with domains specific to each function, to ensure testing all of the mantissa bits in the input, while still allowing some variation in the exponent. Second, I also tested addition, subtraction, multiplication, division, and square root, just for completeness, and verified that they agree exactly 100% of the time, as guaranteed by IEEE-754-2008… at least for one such operation in isolation, eliminating any compiler non-determinism mentioned earlier. And finally, I also tested against MEX-compiled C++ using MinGW-w64, with results identical to MSVC++.)

For most of these functions, the inputs where MATLAB and C++ differ are distributed across the entire unit interval. The two-argument form of the arctangent is particularly interesting. The following figure “zooms in” on the 16.9597% of the sample inputs that yielded different outputs, showing a scatterplot of those inputs using the same color coding as above. (The red, where MATLAB provides the correctly rounded result, is so infrequent it is barely visible in the summary bar chart.)

Scatterplot of points where MATLAB/C++ differ in evaluation of atan2(y,x), using the same color coding as above.

Conclusion

This might all seem rather grim. It certainly does seem hopeless to expect that a C++ translation of even modestly complex MATLAB code will preserve exact agreement of output for all inputs. (In fact, it’s even worse than this. Even the same MATLAB code, executed in the same version of MATLAB, but on two different machines with different architectures, will not necessarily produce identical results given identical inputs.)

But exact agreement between different implementations is rarely the “real” requirement. Recall the Monte Carlo simulation example described above. If we run, say, 1000 iterations of a simulation in MATLAB, and the same 1000 iterations with a MEX translation, it may be that iteration #137 yields different output in the two versions. But that should be okay… as long as the distribution over all 1000 outputs is the same– or sufficiently similar– in both cases.


What is (-1&3)?

This is just nostalgic amusement.  I recently encountered the following while poking around in some code that I had written a disturbingly long time ago:

switch (-1&3) {
    case 1: ...
    case 2: ...
    case 3: ...
...
}

What does this code do?  This is interesting because the switch expression is a constant that could be evaluated at compile time (indeed, this could just as well have been implemented with a series of #if/#elif preprocessor directives instead of a switch-case statement).

As usual, it seems more fun to present this as a puzzle, rather than just point and say, “This is what I did.”  For context, or possibly as a hint, this code was part of a task involving parsing and analyzing digital terrain elevation data (DTED), where it makes at least some sense.


The following are equivalent

I have been reviewing Rosen’s Discrete Mathematics and Its Applications textbook for a course this fall, and I noticed an interesting potential pitfall for students in the first chapter on logic and proofs.

Many theorems in mathematics are of the form, “p if and only if q,” where p and q are logical propositions that may be true or false.  For example:

Theorem 1: An integer m is even if and only if m+2 is even.

where in this case p is “m is even” and q is “m+2 is even.”  The statement of the theorem may itself be viewed as a proposition p \leftrightarrow q, where the logical connective \leftrightarrow is read “if and only if,” and behaves like Boolean equality.  Intuitively, p \leftrightarrow q states that “p and q are (materially) equivalent; they have the same truth value, either both true or both false.”

(Think Boolean expressions in your favorite programming language; for example, the proposition p \land q, read “p and q,” looks like p && q in C++, assuming that p and q are of type bool.  Similarly, the proposition p \leftrightarrow q looks like p == q in C++.)

Now consider extending this idea to the equivalence of more than just two propositions.  For example:

Theorem 2: Let m be an integer.  Then the following are equivalent:

  1. m is even.
  2. m+2 is even.
  3. m-2 is even.

The idea is that the three propositions above (let’s call them p_1, p_2, p_3) always have the same truth value; either all three are true, or all three are false.

So far, so good.  The problem arises when Rosen expresses this general idea of equivalence of multiple propositions p_1, p_2, \ldots, p_n as

p_1 \leftrightarrow p_2 \leftrightarrow \ldots \leftrightarrow p_n

Puzzle: What does this expression mean?  A first concern might be that we need parentheses to eliminate any ambiguity.  But almost unfortunately, it can be shown that the \leftrightarrow connective is associative, meaning that this is a perfectly well-formed propositional formula even without parentheses.  The problem is that it doesn’t mean what it looks like it means.

Reference:

  • Rosen, K. H. (2011). Discrete Mathematics and Its Applications (7th ed.). New York, NY: McGraw-Hill. ISBN-13: 978-0073383095

Dice puzzle

I recently encountered the following interesting problem:

Suppose that I put 6 identical dice in a cup, and roll them simultaneously (as in Farkle, for example).  Then you take those same 6 dice, and roll them all again.  What is the probability that we both observe the same outcome?  For example, we may both roll one of each possible value (1-2-3-4-5-6, but not necessarily in order), or we may both roll three 3s and three 6s, etc.

I like this problem as an “extra” for combinatorics students learning about generating functions.  A numeric solution likely requires some programming (though I’ve been wrong about that here before); the implementation is not overly complex, but the construction of its solution is slightly beyond the “usual” type of homework problem.


A number concatenation problem

Introduction

Consider the following problem: given a finite set of positive integers, arrange them so that the concatenation of their decimal representations is as small as possible.  For example, given the numbers {1, 10, 12}, the arrangement (10, 1, 12) yields the minimum possible value 10112.

I saw a variant of this problem in a recent Reddit post, where it was presented as an “easy” programming challenge, referring in turn to a blog post by Santiago Valdarrama describing it as one of “Five programming problems every software engineer should be able to solve in 1 hour.”

I think the problem is interesting in that it seems simple and intuitive– and indeed it does have a solution with a relatively straightforward implementation– but there are also several “intuitive” approaches that don’t work… and even for the correct implementation, there is some subtlety involved in proving that it really works.

Brute force

First, the following Python 3 implementation simply tries all possible arrangements, returning the lexicographically smallest:

import itertools

def min_cat(num_strs):
    return min(''.join(s) for s in itertools.permutations(num_strs))

(Aside: for convenience in the following discussion, all inputs are assumed to be a list of strings of decimal representations of positive integers, rather than the integers themselves.  This lends some brevity to the code, without adding to or detracting from the complexity of the algorithms.)

This implementation works because every concatenated arrangement has the same length, so a lexicographic comparison is equivalent to comparing the corresponding numeric values.  It’s unacceptably inefficient, though, since we have to consider all n! possible arrangements of n inputs.

Sorting

We can do better, by sorting the inputs in non-decreasing order, and concatenating the result.  But this is where the problem gets tricky: what order relation should we use?

We can’t just use the natural ordering on the integers; using the same earlier example, the sorted arrangement (1, 10, 12) yields 11012, which is larger than the minimum 10112.  Similarly, the sorted arrangement (2, 11) yields 211, which is larger than the minimum 112.

We can’t use the natural lexicographic ordering on strings, either; the initial example (1, 10, 12) fails again here.
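Both failures are easy to reproduce directly; a quick check (using the same convention of string inputs as above):

print(''.join(sorted(['1', '10', '12'], key=int)))  # '11012', not the minimum
print(''.join(sorted(['1', '10', '12'])))           # '11012' again; the minimum is '10112'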

The complexity arises because the numbers in a given input set may have different lengths, i.e. numbers of digits.  If all of the numbers were guaranteed to have the same number of digits, then the numeric and lexicographic orderings are the same, and both yield the correct solution.  Several users in the Reddit thread, and even Valdarrama, propose “padding” each input in various ways before sorting to address this, but this is also tricky to get right.  For example, how should the inputs {12, 121} be padded so that a natural lexicographic ordering yields the correct minimum value 12112?

There is a way to do this, which I’ll leave as an exercise for the reader.  Instead, consider the following solution (still Python 3):

import functools

def cmp(x, y):
    return int(x + y) - int(y + x)

def min_cat(num_strs):
    return ''.join(sorted(num_strs, key=functools.cmp_to_key(cmp)))

There are several interesting things going on here.  First, a Python-specific wrinkle: we need to specify the order relation \prec by which to sort.  This actually would have looked slightly simpler in the older Python 2.7, where you could specify a binary comparison function directly.  In Python 3, you can only provide a unary key function to apply to each element in the list, and sort by that.  It’s an interesting exercise in itself to work out how to “convert” a comparison function into the corresponding key function; here we lean on the built-in functools.cmp_to_key to do it for us.  (This idea of specifying an order relation by a natural comparison without a corresponding natural key has been discussed here before, in the context of Reddit’s comment ranking algorithm.)

Second, recall that the input num_strs is a list of strings, not integers, so in the implementation of the comparison cmp(x, y), the arguments are strings, and the addition operators are concatenation.  The comparison function returns a negative value if the concatenation xy, interpreted as an integer, is less than yx, zero if they are equal, or a positive value if xy is greater than yx.  The intended effect is to sort according to the relation x \prec y defined as xy < yx.
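As a quick check with the example from the introduction (assuming the definitions above are in scope):

print(min_cat(['1', '10', '12']))  # '10112'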

It works… but should it?

This implementation has a nice intuitive justification: suppose that the entire input list contained just two strings x and y.  Then the comparison function effectively realizes the “brute force” evaluation of the two possible arrangements xy and yx.

However, that same intuitive reasoning becomes dangerous as soon as we consider input lists with more than two elements.  That comparison function should bother us, for several reasons:

First, it’s not obvious that the resulting sorted ordering is even well-defined.  That is, is the order relation \prec a strict weak ordering of the set of (decimal string representations of) positive integers?  It certainly isn’t a total ordering, since distinct values can compare as “equal:” for example, consider (1, 11), or (123, 123123), etc.

Second, even assuming the comparison function does realize a strict weak ordering (we’ll prove this shortly), that ordering has some interesting properties.  For example, unlike the natural ordering on the positive integers, there is no smallest element.  That is, for any x, we can always find another strictly lesser y \prec x (as a simple example, note that x0 \prec x, e.g., 1230 \prec 123).  Also unlike the natural ordering on the positive integers, this ordering is dense; given any pair x \prec y, we can always find a third value z in between, i.e., x \prec z \prec y.

Finally, and perhaps most disturbingly, observe that a swap-based sorting algorithm will not necessarily make “monotonic” progress toward the solution: swapping elements that are “out of order” in terms of the comparison function may not always improve the overall situation.  For example, consider the partially-sorted list (12, 345, 1), whose concatenation yields 123451.  The comparison function indicates that 12 and 1 are “out of order” (121>112), but swapping them makes things worse: the concatenation of (1, 345, 12) yields the larger value 134512.
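A quick check of this example with the functions defined above (again, just an illustrative sketch):

print(cmp('12', '1') > 0)           # True: 12 and 1 compare as "out of order"
print(''.join(['12', '345', '1']))  # '123451'
print(''.join(['1', '345', '12']))  # '134512': the swap made things worse
print(min_cat(['12', '345', '1']))  # '112345': the full sort still finds the minimum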

Proof of correctness

Given all of this perhaps non-intuitive weirdness, it seems worth being more rigorous in proving that the above implementation actually does work.  We do this in two steps:

Theorem 1: The relation \prec defined by the comparison function cmp is a strict weak ordering.

Proof: Irreflexivity follows from the definition.  To show transitivity, let x, y, z be positive integers with a, b, c digits, respectively, with x \prec y and y \prec z.  Then

10^b x+y < 10^a y+x and 10^c y+z < 10^b z+y

Thus,

x(10^c-1) < x \frac{z}{y}(10^b-1) < z(10^a-1)

10^c x-x < 10^a z-z

10^c x+z < 10^a z+x

i.e., x \prec z.  Incomparability of x and y corresponds to xy=yx; this is an equivalence relation, with reflexivity and symmetry following from the definition, and transitivity shown exactly as above (with equality in place of inequality).

Theorem 2: Concatenating positive integers sorted by \prec yields the minimum value among all possible arrangements.

Proof: Let x_1 x_2 \ldots x_n be the concatenation of an arrangement of positive integers with minimum value, and suppose that it is not ordered by \prec, i.e., x_i \succ x_{i+1} for some 1 \leq i < n.  Then the concatenation x_1 x_2 \ldots x_{i+1} x_i \ldots x_n is strictly smaller, a contradiction.

(Note that this argument is only “simple” because x_i and x_{i+1} are adjacent.  As mentioned above, swapping non-adjacent elements that are out of order may not in general decrease the overall value.)
