**Introduction**

This post was initially motivated by an interesting recent article by Chris Wellons discussing the Blowfish cipher. The Blowfish cipher’s subkeys are initialized with values containing the first 8336 hexadecimal digits of π, the idea being that implementers may compute these digits for themselves, rather than trusting the integrity of explicitly provided “random” values.

So, how do we compute hexadecimal– or decimal, for that matter– digits of π? This post describes several methods for computing digits of π and other well-known constants, as well as some implementation issues and open questions that I encountered along the way.

**Pi is easy with POSIX**

First, Chris’s implementation of the Blowfish cipher includes a script to automatically generate the code defining the subkeys. The following two lines do most of the work:

```shell
cmd='obase=16; scale=10040; a(1) * 4'
pi="$(echo "$cmd" | bc -l | tr -d '\n\\' | tail -c+3)"
```

This computes base 16 digits of π as `a(1) * 4`, or 4 times the arctangent of 1 (i.e., 4(π/4) = π), using the POSIX arbitrary-precision calculator bc. Simple, neat, end of story.

How might we do the same thing on Windows? There are plenty of approximation formulas and algorithms for computing digits of π, but to more precisely specify the requirements I was interested in: is there an algorithm that generates digits of π:

- in any given base,
- one digit at a time “indefinitely,” i.e., without committing to a fixed precision ahead of time,
- with a relatively simple implementation,
- using only arbitrary-precision *integer* arithmetic (such as is built into Python, or maybe C++ with a library)?

**Bailey-Borwein-Plouffe**

The Bailey-Borwein-Plouffe (BBP) formula seems ready-made for our purpose:

π = Σ_{k=0}^{∞} (1/16^k) (4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6))

This formula has the nice property that it may be used to efficiently compute the n-th hexadecimal digit of π, without having to compute all of the previous digits along the way. Roughly, the approach is to multiply everything by 16^n, then use modular exponentiation to collect and discard the integer part of the sum, leaving the fractional part with enough precision to accurately extract the n-th digit.

However, getting the implementation details right can be tricky. For example, this site provides source code and example data containing one million hexadecimal digits of π generated using the BBP formula… but roughly one out of every 55 digits or so is incorrect.
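To make the look-ahead idea concrete, here is a minimal floating-point sketch of the digit-extraction approach (my own illustration, not the code from that site); note that the double-precision arithmetic used here is exactly the sort of place where accuracy can quietly degrade for large n:

```python
def pi_hex_digit(n):
    """Hexadecimal digit n of pi (0-based, after the point) via BBP."""
    def series(j):
        # Fractional part of sum_k 16^(n-k) / (8k + j).
        s = 0.0
        # Integer parts are discarded via three-argument pow (modular
        # exponentiation), so s never grows large.
        for k in range(n + 1):
            s += pow(16, n - k, 8 * k + j) / (8 * k + j)
            s -= int(s)
        # A few tail terms with k > n, each bounded by 16^(n-k).
        k = n + 1
        while 16.0 ** (n - k) > 1e-17:
            s += 16.0 ** (n - k) / (8 * k + j)
            k += 1
        return s
    x = 4 * series(1) - 2 * series(4) - series(5) - series(6)
    # Reduce to [0, 1) and extract one hex digit.
    return int(16 * (x - int(x) + 1) % 16)

# pi = 3.243F6A88... in hexadecimal
print(''.join('{:x}'.format(pi_hex_digit(n)) for n in range(8)))  # 243f6a88
```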

But suppose that we don’t want to “look ahead,” but instead want to generate *all* hexadecimal digits of π, one after the other from the beginning. Can we still make use of this formula in a simpler way? For example, consider the following Python implementation:

```python
def pi_bbp():
    """Conjectured BBP generator of hex digits of pi."""
    a, b = 0, 1
    k = 0
    while True:
        ak, bk = (120 * k**2 + 151 * k + 47,
                  512 * k**4 + 1024 * k**3 + 712 * k**2 + 194 * k + 15)
        a, b = (16 * a * bk + ak * b, b * bk)
        digit, a = divmod(a, b)
        yield digit
        k = k + 1

for digit in pi_bbp():
    print('{:x}'.format(digit), end='')
```

The idea is similar to converting a fraction to a string in a given base: multiply by the base (16 in this case), extract the next digit as the integer part, then repeat with the remaining fractional part. Here `a`/`b` is the running fractional part, and `ak`/`bk` is the current term in the BBP summation. (Using the `fractions` module doesn’t significantly improve readability, and is *much* slower than managing the numerators and denominators directly.)
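The base-conversion analogy can be seen in isolation in the following sketch, which uses `fractions.Fraction` for clarity (fine for a fixed rational value, even if too slow for the BBP terms above):

```python
from fractions import Fraction
from itertools import islice

def fraction_digits(x, base=16):
    """Generate digits of the fractional part of x, most significant first."""
    x = x - int(x)
    while True:
        x *= base        # shift the next digit into the integer part
        digit = int(x)
        yield digit
        x -= digit       # keep only the remaining fractional part

# 1/10 in hexadecimal is 0.1999...
print(list(islice(fraction_digits(Fraction(1, 10)), 6)))  # [1, 9, 9, 9, 9, 9]
```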

Now for the interesting part: although this implementation appears to behave correctly– at least for the first 500,000 digits where I stopped testing– it isn’t clear to me that it is *always* correct. That is, I don’t see how to prove that this algorithm will continue to generate correct hexadecimal digits of π indefinitely. Perhaps a reader can enlighten me.

(Initial thoughts: Since it’s relatively easy to show that each term in the summation is positive, I think it would suffice to prove that the algorithm never generates an “invalid” hexadecimal digit that is greater than 15. But I don’t see how to do this, either.)

Interestingly, Bailey *et al.* conjecture (see Reference 1 below) a similar algorithm that they have verified out to 10 million hexadecimal digits. The algorithm involves a strangely similar– but slightly different– approach and formula:

```python
def pi_bbmw():
    """Conjectured BBMW generator of hex digits of pi."""
    a, b = 0, 1
    k = 0
    yield 3
    while True:
        k = k + 1
        ak, bk = (120 * k**2 - 89 * k + 16,
                  512 * k**4 - 1024 * k**3 + 712 * k**2 - 206 * k + 21)
        a, b = (16 * a * bk + ak * b, b * bk)
        a = a % b
        yield 16 * a // b
```

Unfortunately, this algorithm is slower, requiring one more expensive arbitrary-precision division operation per digit than the BBP version.
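As a quick empirical sanity check of the conjectured algorithm (with the generator repeated here to keep the example self-contained), the first few outputs do match the known expansion 3.243F6A88…:

```python
from itertools import islice

def pi_bbmw():
    """Conjectured BBMW generator of hex digits of pi (repeated from above)."""
    a, b = 0, 1
    k = 0
    yield 3
    while True:
        k = k + 1
        ak, bk = (120 * k**2 - 89 * k + 16,
                  512 * k**4 - 1024 * k**3 + 712 * k**2 - 206 * k + 21)
        a, b = (16 * a * bk + ak * b, b * bk)
        a = a % b
        yield 16 * a // b

# integer part 3, then the first 8 fractional hex digits
print(''.join('{:x}'.format(d) for d in islice(pi_bbmw(), 9)))  # 3243f6a88
```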

**Proven algorithms**

Although the above two algorithms are certainly short and sweet, (1) they only work for generating hexadecimal digits (vs. decimal, for example), and (2) we don’t actually know if they are correct. Fortunately, there are other options.

Gibbons (Reference 2) describes an algorithm that is not only proven correct, but works for generating digits of π in any base:

```python
def pi_gibbons(base=10):
    """Gibbons spigot generator of digits of pi in given base."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = (base * q, base * (r - n * t), t, k,
                                (base * (3 * q + r)) // t - base * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
```

The bad news is that this is by far the slowest algorithm that I investigated, nearly an order of magnitude slower than BBP on my laptop.

The good news is that there is at least one other algorithm, that is not only competitive with BBP in terms of throughput, but is also general enough to easily compute– in any base– not just the digits of π, but also e (the base of the natural logarithm), φ (the golden ratio), and other constants.

The idea is to express the desired value as a generalized continued fraction:

x = a(0) + b(1)/(a(1) + b(2)/(a(2) + b(3)/(a(3) + …)))

where in particular π may be represented as

π = 0 + 4/(1 + 1²/(3 + 2²/(5 + 3²/(7 + …))))

Then digits may be extracted similarly to the BBP algorithm above: iteratively refine the *convergent* (i.e., approximation) of the continued fraction until the integer part doesn’t change; extract this integer part as the next digit, then multiply the remaining fractional part by the base and continue. In Python:

```python
def continued_fraction(a, b, base=10):
    """Generate digits of continued fraction a(0)+b(1)/(a(1)+b(2)/(...)."""
    (p0, q0), (p1, q1) = (a(0), 1), (a(1) * a(0) + b(1), a(1))
    k = 1
    while True:
        (d0, r0), (d1, r1) = divmod(p0, q0), divmod(p1, q1)
        if d0 == d1:
            yield d1
            p0, p1 = base * r0, base * r1
        else:
            k = k + 1
            x, y = a(k), b(k)
            (p0, q0), (p1, q1) = (p1, q1), (x * p1 + y * p0, x * q1 + y * q0)
```

This approach is handy because not only π, but other common constants as well, have generalized continued fraction representations in which the sequences a(k) and b(k) are “nice.” To generate decimal digits of π:

```python
for digit in continued_fraction(lambda k: 0 if k == 0 else 2 * k - 1,
                                lambda k: 4 if k == 1 else (k - 1)**2, 10):
    print(digit, end='')
```

Or to generate digits of the golden ratio φ:

```python
for digit in continued_fraction(lambda k: 1, lambda k: 1, 10):
    print(digit, end='')
```
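The same machinery works for e, using its simple continued fraction [2; 1, 2, 1, 1, 4, 1, 1, 6, …] (the generator is repeated here to keep the example self-contained):

```python
from itertools import islice

def continued_fraction(a, b, base=10):
    """Generate digits of continued fraction a(0)+b(1)/(a(1)+b(2)/(...)."""
    (p0, q0), (p1, q1) = (a(0), 1), (a(1) * a(0) + b(1), a(1))
    k = 1
    while True:
        (d0, r0), (d1, r1) = divmod(p0, q0), divmod(p1, q1)
        if d0 == d1:
            yield d1
            p0, p1 = base * r0, base * r1
        else:
            k = k + 1
            x, y = a(k), b(k)
            (p0, q0), (p1, q1) = (p1, q1), (x * p1 + y * p0, x * q1 + y * q0)

def a_e(k):
    # Partial quotients of e: [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]
    return 2 if k == 0 else (2 * (k + 1) // 3 if k % 3 == 2 else 1)

print(''.join(str(d) for d in islice(continued_fraction(a_e, lambda k: 1), 10)))
# 2718281828
```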

**Consuming blocks of a generator**

Finally, once I got around to actually using the above algorithm to try to reproduce Chris’s original code generation script, I accidentally injected a bug that took some thought to figure out. Recall that the Blowfish cipher has a couple of (sets of) subkeys, each populated with a segment of the sequence of hexadecimal digits of . So we would like to extract a block of digits, do something with it, then extract a *subsequent* block of digits, do something else, etc.

A simple way to do this in Python is with the built-in `zip` function, which takes multiple iterables as arguments, and returns a single generator that outputs tuples of elements from each of the inputs… and “truncates” to the length of the shortest input. In this case, to extract a fixed number of digits of π, we just `zip` the “infinite” digit generator together with a range of the desired length.

To more clearly see what happens, let’s simplify the context a bit and just try to print the first 10 *decimal* digits of π in two groups of 5:

```python
digits = continued_fraction(lambda k: 0 if k == 0 else 2 * k - 1,
                            lambda k: 4 if k == 1 else (k - 1)**2, 10)
for digit, k in zip(digits, range(5)):
    print(digit, end='')
print()
for digit, k in zip(digits, range(5)):
    print(digit, end='')
```

This doesn’t work: the resulting output blocks are (31415) and (26535)… but π = 3.141592653…. We “lost” the 9 in the middle.

The problem is that `zip` evaluates each input iterator in turn, stopping only when one of them is exhausted. In this case, during the 6th iteration of the first loop, we “eat” the 9 from the digit generator *before* we realize that the `range` iterator is exhausted. When we continue to the second block of 5 digits, we can’t “put back” the 9.

This is easy to fix: just reverse the order of the `zip` arguments, so the `range` is exhausted first, *before* eating the extra element of the “real” sequence we’re extracting from.

```python
for k, digit in zip(range(5), digits):
    print(digit, end='')
print()
for k, digit in zip(range(5), digits):
    print(digit, end='')
```

This works as desired, with output blocks (31415) and (92653).
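Another idiomatic option is `itertools.islice`, which consumes exactly the requested number of items and never reads past them. A minimal sketch, using a simple counting generator in place of the digit generator:

```python
from itertools import islice

def take(n, iterable):
    """Consume and return the next n items of an iterator as a list."""
    return list(islice(iterable, n))

def naturals():
    k = 0
    while True:
        yield k
        k += 1

g = naturals()
print(take(5, g))  # [0, 1, 2, 3, 4]
print(take(5, g))  # [5, 6, 7, 8, 9] -- no element is lost between blocks
```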

**References:**

- Bailey, D., Borwein, J., Mattingly, A., and Wightwick, G., The Computation of Previously Inaccessible Digits of π² and Catalan’s Constant, *Notices of the American Mathematical Society*, **60**(7) 2013, p. 844-854 [PDF]
- Gibbons, J., Unbounded Spigot Algorithms for the Digits of Pi, *American Mathematical Monthly*, **113**(4) April 2006, p. 318-328 [PDF]

I like that you’ve compared several algorithms here. In a casual web search, each algorithm is generally only examined in isolation, and it takes further study and implementation to determine their relative trade-offs. That’s where I got stuck in my own (admittedly brief) survey while I was really trying to focus on implementing the cipher.

(Also, thanks for including working code since that’s also missing from those other articles!)

Thanks! I am particularly curious about the first, “not-as-intended” BBP implementation; if you have been able to find documentation describing that approach (as opposed to the “find the n-th digit skipping the first n-1” approach), proving its correctness, etc., please let me know.


Here is the site with the BBP correctness proof for the exact algo you use: http://www.quantresearch.org/ams_notices.pdf

This is the same paper as Reference 1 at the end of the post (linked directly to ams.org). It isn’t clear which algorithm you are referring to, since I mention two of them, both discussed in the article. For the first one, pi_bbp(), proving that it generates correct digits is a stronger claim than the proof on p. 847 that the infinite summation equals π. For the second one, pi_bbmw(), its correctness is only conjectured in the article (Conjecture 1 on p. 850).

First of all, thank you for the article. This is exactly what I need. I was using the function “continued_fraction” and it looks like the first digit of binary pi, for example, is 3 instead of 1. In fact, this is correct: 3·2⁰ = 3, so it works. However, I think it would be nice to have it in binary.

Thanks.