Hilbert halftone art


This post was motivated by a recent attempt to transform a photograph into a large digital print, in what I hoped would be a mathematically interesting way. The idea was pretty simple: convert the (originally color) image into a black-on-white line drawing– a white background, with a single, continuous, convoluted black curve, one pixel wide.

This isn’t a new idea. One example of how to do this is to convert the image into a solution of an instance of the traveling salesman problem, with more “cities” clustered in darker regions of the source image. But I wanted to do something slightly different, with more explicitly visible structure… which doesn’t necessarily translate to more visual appeal: draw a Hilbert space-filling curve, but vary the order of the curve (roughly, the depth of recursion) locally according to the gray level of the corresponding pixels of the source image.

After some experimenting, I settled on the transformation described in the figure below. Each pixel of the source image is “inflated” to an 8-by-8 block of pixels in the output, with a black pixel (lower left) represented by a second-order Hilbert curve, and a white pixel (upper left) by just a line segment directly connecting the endpoints of the block, with two additional gray levels in between, each connecting progressively more/fewer points along the curve.

Conversion of 2×2-pixel image, with each of four gray levels mapped to corresponding 8×8 block of (approximated) Hilbert curve.


The figure below shows an example of creating an image for input to the algorithm. There are two challenges to consider:

  • Size: The 8-fold inflation of each pixel means that if we want the final output to be 1024-by-1024, then the input image must be 128-by-128, as shown here. (For my project, I could afford a 512-by-512 input, with larger than single-pixel steps between points on the curve, yielding output suitable for a 36-inch print.)
  • Grayscale: The color in the input image must be quantized to just four gray levels, using your favorite photo editor. (I did this in Mathematica.)

Conversion of original color image to 128-by-128 image with 4 gray levels.
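If you'd rather not use a photo editor, the quantization step can be sketched in a few lines of Python. This is a hedged sketch (my own naming, assuming simple nearest-level rounding), using the same four gray levels the script below expects:

```python
LEVELS = (0, 85, 160, 255)   # the four gray levels used by the halftone script

def quantize(gray):
    """Round a 0-255 gray value to the nearest of the four levels."""
    return min(LEVELS, key=lambda level: abs(level - gray))

print([quantize(g) for g in (10, 100, 140, 230)])  # [0, 85, 160, 255]
```

Applying `quantize` per pixel (after converting color to luminance) yields an image ready for the transformation.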

The figure below shows the resulting output. You may have to zoom in to see the details of the curve, especially in the black regions.

Black and white 1024-by-1024 image containing a single black Hilbert curve, one pixel in width.

Source code

Following is the Python source that does the transformation. It uses the Hilbert curve encoding module from my allRGB experiment, and I used Pygame for the image formatting so that I could watch the output as it was created.

import hilbert
import pygame

BLACK = (0, 0, 0, 255)
DARK_GRAY = (85, 85, 85, 255)
LIGHT_GRAY = (160, 160, 160, 255)
WHITE = (255, 255, 255, 255)

STEPS = {BLACK: [1] * 15,
         DARK_GRAY: [1, 1, 1, 1, 4, 1, 1, 5],
         LIGHT_GRAY: [4, 7, 4],
         WHITE: [15]}

class Halftone:
    def __init__(self, image, step):
        self.h = hilbert.Hilbert(2)
        self.index = -1
        self.pos = (0, 0)
        width, height = image.get_size()
        self.target = pygame.Surface((step * 4 * width, step * 4 * height))
        self.target.fill(WHITE)  # white background (new Surfaces default to black)
        for pixel in range(width * height):
            self.move(1, step)
            for n in STEPS[tuple(image.get_at([w // 4 for w in self.pos]))]:
                self.move(n, step)

    def move(self, n, step):
        self.index = self.index + n
        next_pos = self.h.encode(self.index)
        pygame.draw.line(self.target, BLACK, [step * w for w in self.pos],
                                             [step * w for w in next_pos])
        self.pos = next_pos

if __name__ == '__main__':
    import sys
    step = int(sys.argv[1])
    for filename in sys.argv[2:]:
        pygame.image.save(Halftone(pygame.image.load(filename), step).target,
                          filename + '.ht.png')

Probability of playable racks in Scrabble


Earlier this year, I spent some time calculating the probability of a Scrabble “bingo”: drawing a rack of 7 tiles and playing all of them in a single turn to spell a 7-letter word. The interesting part of the analysis was the use of the Google Books Ngrams data set to limit the dictionary of playable words by their frequency of occurrence, to account for novice players like me who might only know “easier” words in more common use. The result was that the probability of drawing a rack of 7 letters that can play a 7-letter word is about 0.132601… if we allow the entire official Scrabble dictionary, including words like zygoses (?). The probability is only about 0.07 if we assume that I only know about a third– the most frequently occurring third– of the 7-letter words in the dictionary.

But given just how poorly I play Scrabble, it seems optimistic to focus on a bingo. Instead of asking whether we can play a 7-letter word using the entire rack, let’s consider what turns out to be a much harder problem, namely, whether the rack is playable at all: that is,

Problem: What is the probability that a randomly drawn rack is playable, i.e., contains some subset of tiles, not necessarily all 7, that spell a word in the dictionary?

Minimal subracks

The first interesting aspect of this problem is the size of the dictionary. There are 187,632 words in the 2014 Official Tournament and Club Word List (the latest for which an electronic version is accessible)… but we can immediately discard nearly 70% of them, those with more than 7 letters, leaving 56,624 words with at most 7 letters. (There is exactly one such word, pizzazz, that I’ll leave in our list, despite the fact that it is one of 13 words in the dictionary with no hope of ever being played: it has too many z’s!)

But we can trim the list further, first by noting that the order of letters in a word doesn’t matter– this helps a little but not much– and second by noting that some words may contain other shorter words as subsets. For example, the single most frequently occurring word, the, contains the shorter word he. So when evaluating a rack to see whether it is playable or not, we only need to check if we can play he; if we can’t, then we can’t play the, either.

In other words, we only need to consider minimal subracks: the minimal elements of the inclusion partial order on sets of tiles spelling words in the dictionary. This is a huge savings: instead of 56,624 words with at most 7 letters, we only need to consider the following 223 minimal “words,” or unordered subsets of tiles, with blank tiles indicated by an underscore:

__, _a, _b, _cv, _d, _e, _f, _g, _h, _i, _j, _k, _l, _m, _n, _o, _p, _q, _r, _s, _t, _u, _vv, _w, _x, _y, _z, aa, ab, aco, acv, ad, ae, af, ag, ah, ai, ak, al, am, an, aov, ap, aqu, ar, as, at, auv, avv, aw, ax, ay, az, bbu, bcu, bdu, be, bfu, bgu, bi, bklu, bllu, bo, brr, bru, by, cciiv, ccko, cdu, cee, cei, cekk, ceu, cffu, cfku, cgku, cgllyy, chly, chrtw, ciirr, ciy, cklu, cllu, cmw, coo, coz, cru, cry, cuz, ddu, de, dfu, dgu, di, djuy, dkuu, dlu, do, dru, dry, duw, eeg, eej, eek, eequu, eev, eez, ef, egg, egk, egv, eh, eiv, eju, eku, ekz, el, em, en, eo, ep, er, es, et, ew, ex, ey, ffpt, fgu, fi, flu, fly, fo, fru, fry, ghlly, gi, gju, glu, go, gpy, grr, gru, gsyyyz, guv, guy, hi, hm, hnt, ho, hpt, hpy, hs, hty, hu, hwy, iirvz, ijzz, ik, il, im, in, io, ip, iq, irry, is, it, ivy, iwz, ix, jjuu, jkuu, jo, kkoo, klru, kouz, kruu, kst, ksy, kuy, lllu, llxyy, lnxy, lo, lpy, lsy, luu, luv, mm, mo, mu, my, no, nsy, nu, nwy, ooz, op, or, os, ot, ow, ox, oy, ppty, pry, pst, psy, ptyy, pu, pxy, rty, ruy, rwy, sty, su, tu, uuyz, uwz, ux, uzz, zzz
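The reduction itself is straightforward to sketch in code. The following is a toy version with my own naming, ignoring blank tiles: sort each word's letters into a canonical multiset, then keep only racks with no already-kept rack as a sub-multiset.

```python
from collections import Counter

def is_submultiset(small, large):
    """Is the multiset of letters in small contained in large?"""
    diff = Counter(small)
    diff.subtract(Counter(large))
    return all(v <= 0 for v in diff.values())

def minimal_subracks(words):
    """Reduce words to the minimal elements of the inclusion partial
    order on their letter multisets."""
    racks = sorted({''.join(sorted(w)) for w in words}, key=len)
    minimal = []
    for rack in racks:
        # keep rack only if no kept rack is a sub-multiset of it
        if not any(is_submultiset(m, rack) for m in minimal):
            minimal.append(rack)
    return minimal

print(minimal_subracks(['the', 'he', 'then', 'aa']))  # 'aa' and 'eh' only
```

Here the/then are discarded because they contain he (canonically 'eh') as a subset, matching the reduction described above.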

The figure below expands on this calculation, showing the number of minimal subracks on the y-axis if we limit the dictionary of words we want to be able to play in various ways: either by frequency on the x-axis, where dumber is to the left; or by minimum word length, where the bottom curve labeled “>=2” is the entire dictionary, and the top curve labeled “>=7” requires a bingo.

Number of minimal playable subsets of tiles in inclusion order on the set of all playable words in OTCWL 2014, restricted by length and/or frequency.

As this figure shows, we benefit greatly– computationally, that is– from being able to play short words. At the other extreme, if we require a bingo, then effectively every word is a minimal subrack, and we actually suffer further inflation by including the various ways to spell each word with blanks.

Finding a subrack in a trie

At this point, the goal is to compute the probability that a randomly drawn rack of 7 tiles contains at least one of the minimal subracks described above. It turns out that there are only 3,199,724 distinct (but not equally likely) possible racks, so it should be feasible to simply enumerate them, testing each rack one by one, accumulating the frequency of those that are playable… as long as each test for containing a minimal subrack is reasonably fast. (Even if we just wanted to estimate the probability via Monte Carlo simulation by sampling randomly drawn racks, we still need the same fast test for playability.)
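That count of distinct racks can be double-checked with a small generating-function computation: the number of 7-subsets of the tile multiset is the coefficient of x^7 in the product of (1 + x + \cdots + x^{c_i}) over the tile counts c_i. A sketch, assuming the standard 100-tile English distribution (26 letters plus 2 blanks):

```python
def multisubset_count(counts, r):
    """Coefficient of x^r in prod (1 + x + ... + x^c) over c in counts."""
    poly = [1]
    for c in counts:
        new = [0] * min(len(poly) + c, r + 1)   # truncate beyond degree r
        for i, pi in enumerate(poly):
            for j in range(c + 1):
                if i + j <= r:
                    new[i + j] += pi
        poly = new
    return poly[r] if r < len(poly) else 0

# A-Z tile counts, then 2 blanks (standard English Scrabble distribution)
TILES = [9, 2, 2, 4, 12, 2, 3, 2, 9, 1, 1, 4, 2, 6, 8, 2, 1,
         6, 4, 6, 4, 2, 2, 1, 2, 1] + [2]
print(multisubset_count(TILES, 7))  # 3199724 distinct racks
```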

This problem is ready-made for a trie data structure, an ordered tree where each edge is labeled with a letter (or blank), and each leaf vertex represents a minimal playable subrack containing the sorted multiset of letters along the path to the leaf. (We don’t have to “mark” any non-leaf vertices; only leaves represent playable words, since we reduced the dictionary to the antichain of minimal subracks.)

The following Python code represents a trie of multisets as a nested dictionary with single-element keys (letters in this case), implementing the two operations needed for our purpose:

  • Inserting a “word” multiset given as a sorted list of elements (or a sorted string of characters in this case),
  • Testing whether a given word (i.e., a drawn rack of 7 tiles, also sorted) contains a word in the trie as a subset.

Insertion is straightforward; the recursive subset test is more interesting:

def trie_insert(t, word):
    """Insert word (a sorted string of letters) into trie t."""
    for letter in word:
        if letter not in t:
            t[letter] = {}
        t = t[letter]

def trie_subset(t, word):
    """Test whether trie t contains some subset of word (sorted)."""
    if not t:
        return True    # reached a leaf: found a minimal subrack
    if not word:
        return False   # tiles exhausted before reaching a leaf
    return (word[0] in t and trie_subset(t[word[0]], word[1:]) or
            trie_subset(t, word[1:]))
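As a quick sanity check, here is a toy run of the two operations (the functions are repeated so the snippet stands alone; racks and words are sorted strings, per the convention above):

```python
def trie_insert(t, word):
    # insert a sorted string of letters into trie t (nested dicts)
    for letter in word:
        if letter not in t:
            t[letter] = {}
        t = t[letter]

def trie_subset(t, word):
    # does the sorted string word contain some word in t as a subset?
    if not t:
        return True
    if not word:
        return False
    return (word[0] in t and trie_subset(t[word[0]], word[1:]) or
            trie_subset(t, word[1:]))

trie = {}
for word in ['aa', 'eh']:           # tiny "dictionary" of minimal subracks
    trie_insert(trie, word)

print(trie_subset(trie, 'cehlos'))  # True: rack contains 'e' then 'h'
print(trie_subset(trie, 'bcdptx'))  # False: no playable subset
```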

Enumerating racks

Armed with a test for whether a given rack is playable, it remains to enumerate all possible racks, testing each one in turn. In more general terms, we have a multiset of 100 tiles of 27 types, and we want to enumerate all possible 7-subsets. The following Python code does this, representing a multiset (or subset) as a list of counts of each element type. This is of interest mainly because it’s my first occasion to use the yield from syntax:

def multisubsets(multiset, r, prefix=[]):
    if r == 0:
        yield prefix + [0] * len(multiset)
    elif len(multiset) > 0:
        for k in range(min(multiset[0], r) + 1):
            yield from multisubsets(multiset[1:], r - k, prefix + [k])

(This and the rest of the Scrabble-specific code are available at the usual location here.)

The final results are shown in the figure below, with convention for each curve similar to the previous figure.

Probability that a random rack of 7 tiles is playable, containing a word in OTCWL 2014 restricted by length and/or frequency.

For example, the bottom curve essentially reproduces the earlier bingo analysis of the probability of playing a word of at least (i.e., exactly) 7 letters from a randomly drawn rack. At the other extreme, if we have the entire dictionary at our disposal (with all those short words of >=2 letters), it’s really hard to draw an unplayable rack (probability 0.00788349). Even if we limit ourselves to just the 100 or so most frequently occurring words– many of which are still very short– we can still play 9 out of 10 racks.

In between, suppose that even though we might know many short words, ego prevents actually playing anything shorter than, say, 5 letters in the first turn. Now the probabilities start to depend more critically on whether we are an expert who knows most of the words in the official dictionary, or a novice who might only know a third or fewer of them.


Analysis of Bingo


Suppose that Alice and Bob play a game for a dollar: they roll a single six-sided die repeatedly, until either:

  1. Alice wins if they observe each of the six possible faces at least once, or
  2. Bob wins if they observe any one face six times.

Would you rather be Alice or Bob in this scenario? Or does it matter? You can play a similar game with a deck of playing cards: shuffle the deck, and deal one card at a time from the top of the deck, with Alice winning when all four suits are dealt, and Bob winning when any particular suit is dealt four times. (This version has the advantage of providing a built-in record of the history of deals, in case of argument over who actually wins.) Again, would you rather be Alice or Bob?

It turns out that Alice has a distinct advantage in both games, winning nearly three times more often than Bob in the dice version, and nearly twice as often in the card version. The objective of this post is to describe some interesting mathematics involved in these games, and relate them to the game of Bingo, where a similar phenomenon is described in a recent Math Horizons article (see reference below): the winning Bingo card among multiple players is much more likely to have a horizontal bingo (all numbers in some row) than vertical (all numbers in some column).

Bingo with a single card

First, let’s describe how Bingo works with just one player. A Bingo card is a 5-by-5 grid of numbers, with each column containing 5 numbers randomly selected without replacement from 15 possibilities: the first “B” column is randomly selected from the numbers 1-15, the second “I” column from 16-30, the third “N” column from 31-45, the fourth “G” column from 46-60, and the fifth “O” column from 61-75. An example is shown below.

Example of an American Bingo card.

A “caller” randomly draws, without replacement, from a pool of balls numbered 1 through 75, calling each number in turn as it is drawn, with the player marking the called number if it is present on his or her card. The player wins by marking all 5 squares in any row, column, or diagonal. (One minor wrinkle in this setup is that, in typical American-style Bingo, the center square is “free,” considered to be already marked before any numbers are called.)

It will be useful to generalize this setup with parameters (n,m), where each card is n \times n with each column selected from m possible values, so that standard Bingo corresponds to (n,m)=(5,15).

We can compute the probability distribution of the number of draws required for a single card to win. Bill Butler describes one approach, enumerating the 2^{n^2-1}=2^{24} possible partially-marked cards and computing the probability of at least one bingo for each such marking.

Alternatively, we can use inclusion-exclusion to compute the cumulative distribution directly, by enumerating just the 2^{2n+2}=2^{12} possible combinations of horizontal, vertical, and diagonal bingos (of which there are 5, 5, and 2, respectively) on a card with k marks. In Mathematica:

bingoSets[n_, free_: True, diag_: True] :=
 Module[{card = Partition[Range[n^2], n]},
  If[free, card[[Ceiling[n/2], Ceiling[n/2]]] = 0];
  DeleteCases[
   Join[card, Transpose[card],
    If[diag, {Diagonal[card], Diagonal[Reverse[card]]}, {}]],
   0, Infinity]]

bingoCDF[k_, nm_, bingos_] :=
  1 - Total@Map[
      (j = Length[Union @@ #];
        (-1)^Length[#] Binomial[nm - j, k - j]) &,
      Subsets[bingos]]/Binomial[nm, k]

bingos = bingoSets[5, False, True];
cdf = Table[bingoCDF[k, 75, bingos], {k, 1, 75}];

(Note the optional arguments specifying whether the center square is free, and whether diagonal bingo is allowed. It will be convenient shortly to consider a simplified version of the game, where these two “special” rules are discarded.)

The following figure shows the resulting distribution, with the expected number of draws, 43.546, shown in red.

Cumulative probability distribution of a single-card bingo in at most the given number of draws. The average of 43.546 draws is shown in red.

Independence with 2 or more cards

Before getting to the “horizontal is more likely than vertical” phenomenon, it’s worth pointing out another non-intuitive aspect of Bingo. If instead of just a single card, we have a game with multiple players, possibly with thousands of different cards, what is the distribution of number of draws until someone wins?

If P_1(X \leq k) is the cumulative distribution for a single card as computed above, then since each of multiple cards is randomly– and independently– “generated,” intuitively it seems like the probability P_j(X \leq k) of at least one winning bingo among j cards in at most k draws should be given by

P_j(X \leq k) \stackrel{?}{=} 1-(1-P_1(X \leq k))^j

Butler uses exactly this approach. However, this is incorrect; although the values in the squares of multiple cards are independent, the presence or absence of winning bingos are not. Perhaps the best way to see this is to consider a “smaller” simplified version of the game, with (n,m)=(2,2), so that there are only four equally likely possible distinct cards:

The four possible cards in (2,2) Bingo.

Let’s further simplify the game so that only horizontal and vertical bingos are allowed, with no diagonals. Then the game must end after either two or three draws: with a single card, it ends in two draws with probability 2/3. However, with two cards, the probability of a bingo in two draws is 5/6, not 1-(1-2/3)^2=8/9.
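These numbers are small enough to verify by brute force, enumerating all four cards and all 2-subsets of draws. A sketch in Python using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations, product

def lines(c1, c2):
    """Winning lines (2 rows + 2 columns, no diagonals) of the card with
    first column c1 (an ordering of {1,2}) and second c2 (of {3,4})."""
    rows = [{c1[0], c2[0]}, {c1[1], c2[1]}]
    cols = [set(c1), set(c2)]
    return [frozenset(s) for s in rows + cols]

cards = [lines(c1, c2)
         for c1 in [(1, 2), (2, 1)] for c2 in [(3, 4), (4, 3)]]
draws = [frozenset(d) for d in combinations([1, 2, 3, 4], 2)]

# single card: probability of a bingo within the first two draws
p1 = Fraction(sum(d in card for card in cards for d in draws),
              len(cards) * len(draws))
# two independent cards: at least one bingo within two draws
p2 = Fraction(sum(d in ca or d in cb
                  for ca, cb in product(cards, repeat=2) for d in draws),
              len(cards) ** 2 * len(draws))
print(p1, p2, 1 - (1 - p1) ** 2)   # 2/3, 5/6, 8/9
```

The brute-force 5/6 differs from the 8/9 that independence would predict, confirming the dependence between cards.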

Horizontal bingos with many cards

Finally, let’s come back to the initial problem: suppose that there are a large number of players, with so many cards in play that we are effectively guaranteed a winner as soon as either:

  1. At least one number from each of the n=5 column groups is drawn, resulting in a horizontal bingo on some card; or
  2. At least n=5 of m=15 possible numbers is drawn from any one particular column group, resulting in a vertical bingo on some card.

(Let’s ignore the free square and diagonal bingos for now; the former is easily handled but unnecessarily complicates the analysis, while the latter would mean that (1) and (2) are not mutually exclusive.)

Then the interesting observation is that a horizontal bingo (1) is over three times more likely to occur than a vertical bingo (2). Furthermore, this setup– Bingo with a large number of cards– is effectively the same as the card and dice games described in the introduction: Bingo is (n,m)=(5,15), the card game is (4,13), and the dice version is effectively (6,\infty).

The Math Horizons article referenced below describes an approach to calculating these probabilities, which involves enumerating integer partitions. However, this problem is ready-made for generating functions, which takes care of the partition house-keeping for us: let’s define

g_a(x) = \left(\sum_{j=a}^{n-1} {m \choose j}x^j\right)^{n-1}

so that, for example, for Bingo with no free square,

g_1(x) = \left({15 \choose 1}x^1 + {15 \choose 2}x^2 + {15 \choose 3}x^3 + {15 \choose 4}x^4\right)^4

Intuitively, each factor corresponds to a column, where each coefficient of x^j indicates the number of ways to draw exactly j numbers from that column (with some minimum number from each column specified by a). The overall coefficient of x^k indicates the number of ways to draw k numbers in total, with neither a horizontal nor vertical bingo.

Then using the notation from the article, the probability P(H_k) of a horizontal bingo on exactly the k-th draw is

P(H_k) = \frac{mn(k-1)!(mn-k)!}{(mn)!}[x^{k-1}]g_1(x)

and the probability P(V_k) of a vertical bingo on exactly the k-th draw is

P(V_k) = \frac{mn(k-1)!(mn-k)!}{(mn)!} {m-1 \choose n-1} [x^{k-n}](g_0(x)-g_1(x))

The summation

\sum_{k=n}^{(n-1)^2+1} P(H_k)

over all possible numbers of draws yields the overall probability of about 0.752 that a horizontal bingo is observed before a vertical one. Similarly, for the card game with (n,m)=(4,13), the probability that Alice wins is 22543417/34165005, or about 0.66. For the dice game– which requires a slight modification to the above formulation, left as an exercise for the reader– Alice wins with probability about 0.747.
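The formulas above translate directly into code. The following is a sketch (`bingo_probs` is my own naming) that expands g_a(x) by repeated polynomial multiplication and sums P(H_k) and P(V_k) exactly:

```python
from fractions import Fraction
from math import comb, factorial

def bingo_probs(n, m):
    """Return (P(horizontal first), P(vertical first)) for the
    many-cards game, with no free square and no diagonals."""
    def g(a):
        # coefficients of g_a(x) = (sum_{j=a}^{n-1} C(m,j) x^j)^(n-1)
        base = [comb(m, j) if a <= j <= n - 1 else 0 for j in range(n)]
        poly = [1]
        for _ in range(n - 1):
            new = [0] * (len(poly) + n - 1)
            for i, pi in enumerate(poly):
                for j, bj in enumerate(base):
                    new[i + j] += pi * bj
            poly = new
        return poly
    g0, g1 = g(0), g(1)
    mn = m * n
    h = v = Fraction(0)
    for k in range(n, mn + 1):
        w = Fraction(mn * factorial(k - 1) * factorial(mn - k), factorial(mn))
        if k - 1 < len(g1):
            h += w * g1[k - 1]                                     # P(H_k)
        if k - n < len(g0):
            v += w * comb(m - 1, n - 1) * (g0[k - n] - g1[k - n])  # P(V_k)
    return h, v

h, v = bingo_probs(4, 13)
print(h)   # the card game: 22543417/34165005, about 0.66
```

Since every game ends in either a horizontal or a vertical bingo, the two probabilities should (and do) sum to exactly 1, which is a useful check on the implementation.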


  1. Benjamin, A., Kisenwether, J., and Weiss, B., The Bingo Paradox, Math Horizons, 25(1) September 2017, p. 18-21 [PDF]

Digits of pi and Python generators


This post was initially motivated by an interesting recent article by Chris Wellons discussing the Blowfish cipher. The Blowfish cipher’s subkeys are initialized with values containing the first 8336 hexadecimal digits of \pi, the idea being that implementers may compute these digits for themselves, rather than trusting the integrity of explicitly provided “random” values.

So, how do we compute hexadecimal– or decimal, for that matter– digits of \pi? This post describes several methods for computing digits of \pi and other well-known constants, as well as some implementation issues and open questions that I encountered along the way.

Pi is easy with POSIX

First, Chris’s implementation of the Blowfish cipher includes a script to automatically generate the code defining the subkeys. The following two lines do most of the work:

cmd='obase=16; scale=10040; a(1) * 4'
pi="$(echo "$cmd" | bc -l | tr -d '\n\\' | tail -c+3)"

This computes base 16 digits of \pi as a(1) * 4, or 4 times the arctangent of 1 (i.e., \tan(\pi/4) = 1), using the POSIX arbitrary-precision calculator bc. Simple, neat, end of story.

How might we do the same thing on Windows? There are plenty of approximation formulas and algorithms for computing digits of \pi, but to more precisely specify the requirements I was interested in, is there an algorithm that generates digits of \pi:

  • in any given base,
  • one digit at a time “indefinitely,” i.e., without committing to a fixed precision ahead of time,
  • with a relatively simple implementation,
  • using only arbitrary-precision integer arithmetic (such as is built into Python, or maybe C++ with a library)?


The Bailey-Borwein-Plouffe (BBP) formula seems ready-made for our purpose:

\pi = \sum_{k=0}^{\infty} \frac{1}{16^k} (\frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6})

This formula has the nice property that it may be used to efficiently compute the n-th hexadecimal digit of \pi, without having to compute all of the previous digits along the way. Roughly, the approach is to multiply everything by 16^n, then use modular exponentiation to collect and discard the integer part of the sum, leaving the fractional part with enough precision to accurately extract the n-th digit.

However, getting the implementation details right can be tricky. For example, this site provides source code and example data containing one million hexadecimal digits of \pi generated using the BBP formula… but roughly one out of every 55 digits or so is incorrect.

But suppose that we don’t want to “look ahead,” but instead want to generate all hexadecimal digits of \pi, one after the other from the beginning. Can we still make use of this formula in a simpler way? For example, consider the following Python implementation:

def pi_bbp():
    """Conjectured BBP generator of hex digits of pi."""
    a, b = 0, 1
    k = 0
    while True:
        ak, bk = (120 * k**2 + 151 * k + 47,
                  512 * k**4 + 1024 * k**3 + 712 * k**2 + 194 * k + 15)
        a, b = (16 * a * bk + ak * b, b * bk)
        digit, a = divmod(a, b)
        yield digit
        k = k + 1

for digit in pi_bbp():
    print('{:x}'.format(digit), end='')

The idea is similar to converting a fraction to a string in a given base: multiply by the base (16 in this case), extract the next digit as the integer part, then repeat with the remaining fractional part. Here a/b is the running fractional part, and a_k / b_k is the current term in the BBP summation. (Using the fractions module doesn’t significantly improve readability, and is much slower than managing the numerators and denominators directly.)

Now for the interesting part: although this implementation appears to behave correctly– at least for the first 500,000 digits where I stopped testing– it isn’t clear to me that it is always correct. That is, I don’t see how to prove that this algorithm will continue to generate correct hexadecimal digits of \pi indefinitely. Perhaps a reader can enlighten me.

(Initial thoughts: Since it’s relatively easy to show that each term a_k / b_k in the summation is positive, I think it would suffice to prove that the algorithm never generates an “invalid” hexadecimal digit that is greater than 15. But I don’t see how to do this, either.)
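For what it's worth, the generator's output is at least easy to spot-check against the well-known leading hexadecimal digits of \pi (the generator is repeated here so the snippet runs standalone):

```python
from itertools import islice

def pi_bbp():
    """Conjectured BBP generator of hex digits of pi."""
    a, b = 0, 1
    k = 0
    while True:
        ak, bk = (120 * k**2 + 151 * k + 47,
                  512 * k**4 + 1024 * k**3 + 712 * k**2 + 194 * k + 15)
        a, b = (16 * a * bk + ak * b, b * bk)
        digit, a = divmod(a, b)
        yield digit
        k = k + 1

digits = ''.join('{:x}'.format(d) for d in islice(pi_bbp(), 10))
print(digits)  # 3243f6a888: pi = 3.243F6A8885A3... in hex
```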

Interestingly, Bailey et al. conjecture (see Reference 1 below) a similar algorithm that they have verified out to 10 million hexadecimal digits, using a strangely similar, but slightly different, formula:

def pi_bbmw():
    """Conjectured BBMW generator of hex digits of pi."""
    a, b = 0, 1
    k = 0
    yield 3
    while True:
        k = k + 1
        ak, bk = (120 * k**2 - 89 * k + 16,
                  512 * k**4 - 1024 * k**3 + 712 * k**2 - 206 * k + 21)
        a, b = (16 * a * bk + ak * b, b * bk)
        a = a % b
        yield 16 * a // b

Unfortunately, this algorithm is slower, requiring one more expensive arbitrary-precision division operation per digit than the BBP version.

Proven algorithms

Although the above two algorithms are certainly short and sweet, (1) they only work for generating hexadecimal digits (vs. decimal, for example), and (2) we don’t actually know if they are correct. Fortunately, there are other options.

Gibbons (Reference 2) describes an algorithm that is not only proven correct, but works for generating digits of \pi in any base:

def pi_gibbons(base=10):
    """Gibbons spigot generator of digits of pi in given base."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = (base * q, base * (r - n * t), t, k,
                                (base * (3 * q + r)) // t - base * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

The bad news is that this is by far the slowest algorithm that I investigated, nearly an order of magnitude slower than BBP on my laptop.

The good news is that there is at least one other algorithm, that is not only competitive with BBP in terms of throughput, but is also general enough to easily compute– in any base– not just the digits of \pi, but also e (the base of the natural logarithm), \phi (the golden ratio), \sqrt{2}, and others.

The idea is to express the desired value as a generalized continued fraction:

a_0 + \frac{b_1}{a_1 + \frac{b_2}{a_2 + \frac{b_3}{a_3 + \cdots}}}

where in particular \pi may be represented as

\pi = 0 + \frac{4}{1 + \frac{1^2}{3 + \frac{2^2}{5 + \cdots}}}

Then digits may be extracted similarly to the BBP algorithm above: iteratively refine the convergent (i.e., approximation) of the continued fraction until the integer part doesn’t change; extract this integer part as the next digit, then multiply the remaining fractional part by the base and continue. In Python:

def continued_fraction(a, b, base=10):
    """Generate digits of continued fraction a(0)+b(1)/(a(1)+b(2)/(...))."""
    (p0, q0), (p1, q1) = (a(0), 1), (a(1) * a(0) + b(1), a(1))
    k = 1
    while True:
        (d0, r0), (d1, r1) = divmod(p0, q0), divmod(p1, q1)
        if d0 == d1:
            yield d1
            p0, p1 = base * r0, base * r1
        else:
            k = k + 1
            x, y = a(k), b(k)
            (p0, q0), (p1, q1) = (p1, q1), (x * p1 + y * p0, x * q1 + y * q0)

This approach is handy because not only \pi, but other common constants as well, have generalized continued fraction representations in which the sequences (a_k), (b_k) are “nice.” To generate decimal digits of \pi:

for digit in continued_fraction(lambda k: 0 if k == 0 else 2 * k - 1,
                                lambda k: 4 if k == 1 else (k - 1)**2, 10):
    print(digit, end='')

Or to generate digits of the golden ratio \phi:

for digit in continued_fraction(lambda k: 1,
                                lambda k: 1, 10):
    print(digit, end='')
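Similarly for e, using its known generalized continued fraction e = 2 + 1/(1 + 1/(2 + 2/(3 + 3/(4 + \cdots)))). The generator is repeated here so the snippet runs standalone:

```python
from itertools import islice

def continued_fraction(a, b, base=10):
    """Generate digits of a(0) + b(1)/(a(1) + b(2)/(...)) in given base."""
    (p0, q0), (p1, q1) = (a(0), 1), (a(1) * a(0) + b(1), a(1))
    k = 1
    while True:
        (d0, r0), (d1, r1) = divmod(p0, q0), divmod(p1, q1)
        if d0 == d1:
            yield d1
            p0, p1 = base * r0, base * r1
        else:
            k = k + 1
            x, y = a(k), b(k)
            (p0, q0), (p1, q1) = (p1, q1), (x * p1 + y * p0, x * q1 + y * q0)

# e = 2 + 1/(1 + 1/(2 + 2/(3 + 3/(4 + ...))))
e_digits = list(islice(continued_fraction(lambda k: 2 if k == 0 else k,
                                          lambda k: 1 if k == 1 else k - 1),
                       10))
print(e_digits)  # [2, 7, 1, 8, 2, 8, 1, 8, 2, 8]
```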

Consuming blocks of a generator

Finally, once I got around to actually using the above algorithm to try to reproduce Chris’s original code generation script, I accidentally injected a bug that took some thought to figure out. Recall that the Blowfish cipher has a couple of (sets of) subkeys, each populated with a segment of the sequence of hexadecimal digits of \pi. So we would like to extract a block of digits, do something with it, then extract a subsequent block of digits, do something else, etc.

A simple way to do this in Python is with the built-in zip function, that takes multiple iterables as arguments, and returns a single generator that outputs tuples of elements from each of the inputs… and “truncates” to the length of the shortest input. In this case, to extract a fixed number of digits of \pi, we just zip the “infinite” digit generator together with a range of the desired length.

To more clearly see what happens, let’s simplify the context a bit and just try to print the first 10 decimal digits of \pi in two groups of 5:

digits = continued_fraction(lambda k: 0 if k == 0 else 2 * k - 1,
                            lambda k: 4 if k == 1 else (k - 1)**2, 10)
for digit, k in zip(digits, range(5)):
    print(digit, end='')
for digit, k in zip(digits, range(5)):
    print(digit, end='')

This doesn’t work: the resulting output blocks are (31415) and (26535)… but \pi = 3.1415926535\ldots. We “lost” the 9 in the middle.

The problem is that zip evaluates each input iterator in turn, stopping only when one of them is exhausted. In this case, during the 6th iteration of the first loop, we “eat” the 9 from the digit generator before we realize that the range iterator is exhausted. When we continue to the second block of 5 digits, we can’t “put back” the 9.

This is easy to fix: just reverse the order of the zip arguments, so the range is exhausted first, before eating the extra element of the “real” sequence we’re extracting from.

for k, digit in zip(range(5), digits):
    print(digit, end='')
for k, digit in zip(range(5), digits):
    print(digit, end='')

This works as desired, with output blocks (31415) and (92653).
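The same pitfall, and an alternative fix, can be demonstrated with any infinite iterator: itertools.islice consumes exactly the requested number of elements, so it never eats the extra one that zip does.

```python
from itertools import count, islice

nums = count(1)                          # infinite generator 1, 2, 3, ...
a = [x for x, k in zip(nums, range(5))]  # the 6th iteration eats the 6
b = [x for x, k in zip(nums, range(5))]
print(a, b)                              # [1..5], then [7..11]: 6 is lost

nums = count(1)
c = list(islice(nums, 5))                # islice never reads ahead
d = list(islice(nums, 5))
print(c, d)                              # [1..5], then [6..10]
```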


  1. Bailey, D., Borwein, J., Mattingly, A., Wightwick, G., The Computation of Previously Inaccessible Digits of \pi^2 and Catalan’s Constant, Notices of the American Mathematical Society, 60(7) 2013, p. 844-854 [PDF]
  2. Gibbons, J., Unbounded Spigot Algorithms for the Digits of Pi, American Mathematical Monthly, 113(4) April 2006, p. 318-328 [PDF]

Floating-point agreement between MATLAB and C++


A common development approach in MATLAB is to:

  1. Write MATLAB code until it’s unacceptably slow.
  2. Replace the slowest code with a C++ implementation called via MATLAB’s MEX interface.
  3. Goto step 1.

Regression testing the faster MEX implementation against the slower original MATLAB can be difficult. Even a seemingly line-for-line, “direct” translation from MATLAB to C++, when provided with exactly the same numeric inputs, executed on the same machine, with the same default floating-point rounding mode, can still yield drastically different output. How does this happen?

There are three primary causes of such differences, none of them easy to fix. The purpose of this post is just to describe these causes, focusing on one in particular that I learned occurs more frequently than I realized.

1. The butterfly effect

This is where the drastically different results typically come from. Even if the inputs to the MATLAB and MEX implementations are identical, suppose that just one intermediate calculation yields even the smallest possible difference in its result… and is followed by a long sequence of further calculations using that result. That small initial difference can be greatly magnified in the final output, due to cumulative effects of rounding error. For example:

x = 0.1;
for k = 1:100
    x = 4 * x * (1 - x);
end
% x == 0.37244749676375793

double x = 0.10000000000000002;
for (int k = 1; k <= 100; ++k) {
    x = 4 * x * (1 - x);
}
// x == 0.5453779481420313

This example is only contrived in its simplicity, exaggerating the “speed” of divergence with just a few hundred floating-point operations. Consider a more realistic, more complex Monte Carlo simulation of, say, an aircraft autopilot maneuvering in response to simulated sensor inputs. In a particular Monte Carlo iteration, the original MATLAB implementation might successfully dampen a severe Dutch roll, while the MEX version might result in a crash (of the aircraft, I mean).

Of course, for this divergence of behavior to occur at all, there must be that first difference in the result of an intermediate calculation. So this “butterfly effect” really is just an effect— it’s not a cause at all, just a magnified symptom of the two real causes, described below.

2. Compiler non-determinism

As far as I know, the MATLAB interpreter is pretty well-behaved and predictable, in the sense that MATLAB source code explicitly specifies the order of arithmetic operations, and the precision with which they are executed. A C++ compiler, on the other hand, has a lot of freedom in how it translates source code into the corresponding sequence of arithmetic operations… and possibly even the precision with which they are executed.

Even if we assume that all operations are carried out in double precision, order of operations can matter; the problem is that this order is not explicitly controlled by the C++ source code being compiled (edit: at least for some speed-optimizing compiler settings). For example, when a compiler encounters the statement double x = a+b+c;, it could emit code to effectively calculate (a+b)+c, or a+(b+c), which do not necessarily produce the same result. That is, double-precision addition is not associative:

(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3) % this is false

Worse, explicit parentheses in the source code may help, but they are not guaranteed to.
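The inequality above is easy to confirm directly. IEEE-754 double arithmetic behaves the same way in Python, so the following check does not depend on any particular C++ compiler:

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.1 + 0.2 rounds up to 0.30000000000000004
right = a + (b + c)  # 0.2 + 0.3 rounds to exactly 0.5

print(left == right)  # False: left is 0.6000000000000001, right is 0.6
```

The two groupings differ in the last bit because each intermediate sum is rounded before the next addition.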

Another possible problem is intermediate precision. For example, in the process of computing (a+b)+c, the intermediate result t=(a+b) might be computed in, say, 80-bit extended precision, before rounding the final sum to 64-bit double precision. This has bitten me in other ways discussed here before; Bruce Dawson has several interesting articles with much more detail on this and other issues with floating-point arithmetic.

3. Transcendental functions

So suppose that we are comparing the results of MATLAB and C++ implementations of the same algorithm. We have verified that the numeric inputs are identical, and we have also somehow verified that the arithmetic operations specified by the algorithm are executed in the same order, all in double precision, in both implementations. Yet the output still differs between the two.

Another possible– in fact likely– cause of such differences is in the implementation of transcendental functions such as sin, cos, atan2, exp, etc., which are not required by IEEE-754-2008 to be correctly rounded due to the table maker’s dilemma. For example, following is the first actual instance of this problem that I encountered years ago, reproduced here in MATLAB R2017a (on my Windows laptop):

x = 193513.887169782;
y = 44414.97148164646;
atan2(y, x) == 0.2256108075334872

while the corresponding C++ implementation (still called from MATLAB, built as a MEX function using Microsoft Visual Studio 2015) yields

#include <cmath>
std::atan2(y, x) == 0.22561080753348722;

The two results differ by an ulp, with (in this case) the MATLAB version being correctly rounded.

(Rant: Note that both of the above values, despite actually being different, display as the same 0.225610807533487 in the MATLAB command window, which for some reason neglects to provide “round-trip” representations of its native floating-point data type. See here for a function that I find handy when troubleshooting issues like this.)
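Python's shortest round-trip repr makes the rant concrete: the two results above are distinct doubles, yet they agree in their first 15 significant digits, which is all that MATLAB's long display shows:

```python
matlab_result = 0.2256108075334872  # MATLAB atan2(y, x)
cpp_result = 0.22561080753348722    # std::atan2(y, x)

print(matlab_result == cpp_result)  # False: distinct doubles, one ulp apart
print('%.15g' % matlab_result)      # 0.225610807533487
print('%.15g' % cpp_result)         # 0.225610807533487, the same 15 digits
```

A round-trip representation (17 significant digits in the worst case for doubles) is what distinguishes them.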

What I found surprising, after recently exploring this issue in more detail, is that the above example is not an edge case: the MATLAB and C++ cmath implementations of the trigonometric and exponential functions disagree quite frequently– and furthermore, the above example notwithstanding, the C++ implementation tends to be more accurate more of the time, significantly so in the case of atan2 and the exponential functions, as the following figure shows.

Probability of MATLAB/C++ differences in function evaluation for input randomly selected from the unit interval.

The setup: on my Windows laptop, with MATLAB R2017a and Microsoft Visual Studio 2015 (with code compiled from MATLAB as a MEX function with the provided default compiler configuration file), I compared each function’s output for 1 million randomly generated inputs– or pairs of inputs in the case of atan2— in the unit interval.

Of those 1 million inputs, I calculated how many yielded different output from MATLAB and C++, where the differences fell into 3 categories:

  • Red indicates that MATLAB produced the correctly rounded result, with the exact value between the MATLAB and C++ outputs (i.e., both implementations had an error less than an ulp).
  • Gray indicates that C++ produced the correctly rounded result, with the exact value between the MATLAB and C++ outputs (i.e., both implementations had an error less than an ulp).
  • Blue indicates that C++ produced the correctly rounded result, between the exact value and the MATLAB output (i.e., MATLAB had an error greater than an ulp).

(A few additional notes on test setup: first, I used random inputs in the unit interval, instead of evenly-spaced values with domains specific to each function, to ensure testing all of the mantissa bits in the input, while still allowing some variation in the exponent. Second, I also tested addition, subtraction, multiplication, division, and square root, just for completeness, and verified that they agree exactly 100% of the time, as guaranteed by IEEE-754-2008… at least for one such operation in isolation, eliminating any compiler non-determinism mentioned earlier. And finally, I also tested against MEX-compiled C++ using MinGW-w64, with results identical to MSVC++.)

For most of these functions, the inputs where MATLAB and C++ differ are distributed across the entire unit interval. The two-argument form of the arctangent is particularly interesting. The following figure “zooms in” on the 16.9597% of the sample inputs that yielded different outputs, showing a scatterplot of those inputs using the same color coding as above. (The red, where MATLAB provides the correctly rounded result, is so infrequent it is barely visible in the summary bar chart.)

Scatterplot of points where MATLAB/C++ differ in evaluation of atan2(y,x), using the same color coding as above.


This might all seem rather grim. It certainly does seem hopeless to expect that a C++ translation of even modestly complex MATLAB code will preserve exact agreement of output for all inputs. (In fact, it’s even worse than this. Even the same MATLAB code, executed in the same version of MATLAB, but on two different machines with different architectures, will not necessarily produce identical results given identical inputs.)

But exact agreement between different implementations is rarely the “real” requirement. Recall the Monte Carlo simulation example described above. If we run, say, 1000 iterations of a simulation in MATLAB, and the same 1000 iterations with a MEX translation, it may be that iteration #137 yields different output in the two versions. But that should be okay… as long as the distribution over all 1000 outputs is the same– or sufficiently similar– in both cases.
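One way to make "sufficiently similar distributions" concrete is a two-sample statistic. The sketch below (my illustration, not from the original testing setup) computes the Kolmogorov-Smirnov distance, the maximum vertical gap between the two empirical CDFs, which could then be compared against a chosen threshold:

```python
def ks_distance(xs, ys):
    """Maximum vertical distance between the empirical CDFs of two samples."""
    xs, ys = sorted(xs), sorted(ys)
    d = 0.0
    for x in xs + ys:
        # Empirical CDF of each sample evaluated at x.
        fx = sum(1 for v in xs if v <= x) / len(xs)
        fy = sum(1 for v in ys if v <= x) / len(ys)
        d = max(d, abs(fx - fy))
    return d

# Identical samples agree exactly; disjoint samples are maximally different.
print(ks_distance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(ks_distance([0.0, 0.0], [1.0, 1.0]))            # 1.0
```

This quadratic-time version is only a sketch; for 1000-iteration Monte Carlo outputs it is still plenty fast, and a library routine would do the thresholding (p-value) for you.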

Posted in Uncategorized | 3 Comments

What is (-1&3)?

This is just nostalgic amusement.  I recently encountered the following while poking around in some code that I had written a disturbingly long time ago:

switch (-1&3) {
    case 1: ...
    case 2: ...
    case 3: ...
}

What does this code do?  This is interesting because the switch expression is a constant that could be evaluated at compile time (indeed, this could just as well have been implemented with a series of #if/#elif preprocessor directives instead of a switch-case statement).

As usual, it seems more fun to present this as a puzzle, rather than just point and say, “This is what I did.”  For context, or possibly as a hint, this code was part of a task involving parsing and analyzing digital terrain elevation data (DTED), where it makes at least some sense.

Posted in Uncategorized | 2 Comments

The following are equivalent

I have been reviewing Rosen’s Discrete Mathematics and Its Applications textbook for a course this fall, and I noticed an interesting potential pitfall for students in the first chapter on logic and proofs.

Many theorems in mathematics are of the form, “p if and only if q,” where p and q are logical propositions that may be true or false.  For example:

Theorem 1: An integer m is even if and only if m+2 is even.

where in this case p is “m is even” and q is “m+2 is even.”  The statement of the theorem may itself be viewed as a proposition p \leftrightarrow q, where the logical connective \leftrightarrow is read “if and only if,” and behaves like Boolean equality.  Intuitively, p \leftrightarrow q states that “p and q are (materially) equivalent; they have the same truth value, either both true or both false.”

(Think Boolean expressions in your favorite programming language; for example, the proposition p \land q, read “p and q,” looks like p && q in C++, assuming that p and q are of type bool.  Similarly, the proposition p \leftrightarrow q looks like p == q in C++.)

Now consider extending this idea to the equivalence of more than just two propositions.  For example:

Theorem 2: Let m be an integer.  Then the following are equivalent:

  1. m is even.
  2. m+2 is even.
  3. m-2 is even.

The idea is that the three propositions above (let’s call them p_1, p_2, p_3) always have the same truth value; either all three are true, or all three are false.

So far, so good.  The problem arises when Rosen expresses this general idea of equivalence of multiple propositions p_1, p_2, \ldots, p_n as

p_1 \leftrightarrow p_2 \leftrightarrow \ldots \leftrightarrow p_n

Puzzle: What does this expression mean?  A first concern might be that we need parentheses to eliminate any ambiguity.  But almost unfortunately, it can be shown that the \leftrightarrow connective is associative, meaning that this is a perfectly well-formed propositional formula even without parentheses.  The problem is that it doesn’t mean what it looks like it means.
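The pitfall can be checked by brute force. The short sketch below (my own illustration) compares the chained connective, evaluated left to right, against genuine mutual equivalence, over all truth assignments to three propositions:

```python
from itertools import product

def chained_iff(props):
    # Left-associated p1 <-> p2 <-> ... <-> pn, where <-> is Boolean equality.
    result = props[0]
    for p in props[1:]:
        result = (result == p)
    return result

def all_equivalent(props):
    # What "the following are equivalent" actually asserts.
    return all(p == props[0] for p in props)

# Print the assignments on which the two notions disagree.
for bits in product([False, True], repeat=3):
    if chained_iff(bits) != all_equivalent(bits):
        print(bits)
```

Running this shows the two notions are not the same proposition: the chained formula is true on some assignments where the three propositions do not all have the same truth value, and false on one where they do.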


  • Rosen, K. H. (2011). Discrete Mathematics and Its Applications (7th ed.). New York, NY: McGraw-Hill. ISBN-13: 978-0073383095
Posted in Uncategorized | 4 Comments