Another dice puzzle

Introduction

The dice game craps is played by repeatedly rolling two six-sided dice, and making various wagers on the outcomes of the sequence of rolls. The details of the wagers don’t concern us here– instead, let’s consider just one particular example scenario involving a common wager called the “pass line bet”:

Suppose that we have just rolled a 4 (which, by the way, occurs with probability 3/36 on any single roll). Having thus established 4 as “the point,” our objective now is to roll a 4 again, rolling repeatedly if necessary… but if we ever roll a 7 (which, by the way, occurs with probability 6/36 on any single roll), then we lose. If and once we roll a 4, we win.

To summarize: we are trying to roll a 4 (with probability 3/36 on each roll). If we roll anything other than a 4 or a 7 (which happens with probability 27/36), we continue rolling.

So, here is a puzzle, let’s call it Problem 1: how long does it take on average to win? More precisely, what is the expected length of a winning sequence of rolls, i.e., where we never roll a 7?

Thoughts

This problem is not new, nor is it even particularly sophisticated. But it is very similar to a problem that circulated a few years ago, and that generated some interesting discussion. Here is that earlier problem, let’s call it Problem Zero:

Roll a single fair six-sided die repeatedly until you roll a 6. What is the expected number of rolls required, given that we observe that all rolls are even?

My motivation for this post is two-fold. First, this is a sort of pedagogical thought experiment. Problem Zero has already been shown in the wild to be dangerously non-intuitive. Problem 1 is the same problem— that is, it is essentially equivalent to Problem Zero, just dressed up with different values for the single-trial probabilities. But is Problem 1 inherently easier, less “tricky?” And if so, why? Is it because the numbers are different, or is it that the problem is cast in a more concrete setting as a casino game, etc.?

I don’t know if these problems are actually equally “tricky” or not. But at least in the case of the known trickiness of Problem Zero, I have a theory. Both of these problems are like many others that I have enjoyed discussing here in the past, in that they are mathematical problems… but of a sort that a student may approach not just with pencil and paper, but also with a computer, even if only as an initial exploratory tool.

Which brings me to my theory, based on observation of past debate of Problem Zero. We can begin tackling this problem with pencil and paper, or by writing a simulation… and I suspect that, in this case, starting with a simulation makes it much harder to come up with a (the?) wrong answer.
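
As a concrete example of that exploratory approach, here is a minimal simulation sketch of Problem 1 (the function name and trial count are mine): play out the point repeatedly, keep only the sequences that end in a win, and average their lengths.

import random

def simulate_problem_1(trials=100000):
    """Estimate the expected number of rolls in a winning sequence (point of 4)."""
    total_rolls, wins = 0, 0
    for _ in range(trials):
        rolls = 0
        while True:
            roll = random.randint(1, 6) + random.randint(1, 6)
            rolls = rolls + 1
            if roll == 4:    # we win; keep this sequence
                total_rolls, wins = total_rolls + rolls, wins + 1
                break
            if roll == 7:    # we lose; discard this sequence
                break
    return total_rolls / wins

print(simulate_problem_1())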


Tracking a card through a shuffled deck

Introduction

Riffle shuffle a deck of cards once. What is the probability that the original top card ends up on the bottom of the shuffled deck (or vice versa)? This is very unlikely… but suppose that we shuffle again… and again, etc. How many shuffles on average are required until we first see the original top card moved to the bottom of the deck? The motivation for this post is to capture my notes on this problem, which turns out to have a very nice solution.

Approach

To begin, we use the Gilbert-Shannon-Reeds (GSR) model of a riffle shuffle, that captures– in a realistic but mathematically tractable way– the random imperfections in how most non-expert humans shuffle cards: by cutting the deck roughly in half, then interleaving the cards back together from the two halves, with some clumping.

There are several different characterizations of the GSR model, all yielding the same probability distribution of possible permutations of the cards in the deck. For our purpose, the most convenient way to model a single random GSR shuffle of a deck with n=52 cards is as a uniformly random string of n bits (imagine flipping a fair coin n times). The number of zero bits indicates how many cards to cut from the top of the deck, and the positions of the zero bits indicate how those top cards are interleaved with the cards from the bottom part of the deck (represented by the one bits). The following Python code illustrates how this works:

import numpy as np

def riffle_shuffle(cards, rng=np.random.default_rng()):
    # One fair coin flip per card: the zero positions receive the cards cut from the top of the deck.
    bits = rng.integers(0, 2, len(cards))
    # Stable double argsort riffles the two packets together, preserving order within each.
    return [cards[k] for k in np.argsort(np.argsort(bits, kind='stable'))]

print(riffle_shuffle(range(52)))

This representation of a riffle shuffle as a bit string makes it easy to calculate probabilities of various types of shuffles. For example, of all 2^n possible equally likely GSR shuffles, how many move the card from position i down to position j, where 1 \leq i<j \leq n? This number s(i,j) is equal to the number of bit strings with at least i zeros (since the card we want to move down must be in the top half of the cut), and exactly j-i ones before the i-th zero. That is,

s(i,j) = \sum\limits_{k=i}^n {j-1 \choose i-1} {n-j \choose k-i}

It’s an exercise for the reader to show that we can extend this approach to also count shuffles that move a card up in the deck (i>j), or that fix a card (i=j) so that it remains in its original position, with the result being a transition matrix P given by

P_{i,j} = \frac{s(i,j) + s(n+1-i,n+1-j)}{2^n}, 1 \leq i,j \leq n

where P_{i,j} indicates the probability that a single GSR shuffle will move the card in position i to position j.
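
The following Python sketch (the function names are mine) is a direct translation of these formulas, evaluating the summation for s(i,j) and assembling the transition matrix P:

import numpy as np
from math import comb

def s_count(n, i, j):
    """Number of n-bit strings moving the card at position i (in the top packet) to position j."""
    return sum(comb(j - 1, i - 1) * comb(n - j, k - i) for k in range(i, n + 1))

def transition_matrix(n):
    """P[i-1, j-1] = probability that one GSR shuffle moves the card at position i to position j."""
    P = np.empty((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            P[i - 1, j - 1] = (s_count(n, i, j) + s_count(n, n + 1 - i, n + 1 - j)) / 2 ** n
    return P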

At this point, we have what we need: consider a Markov chain with n states corresponding to the current position of a particular distinguished card in the deck. If this distinguished card starts in position, say, i=1 (i.e., the top card of the deck), then our initial state distribution \pi_0 is the i-th basis (row) vector, and we can track the distribution of possible future positions of the card after k shuffles as \pi_0 P^k.

But we originally asked for the average number of shuffles needed to move the card initially at position i=1 to position j=n. To calculate this, let Q be the matrix obtained from P by deleting row j and column j, and compute the fundamental matrix N=(I-Q)^{-1}. Then the expected number of shuffles until the target card starting in position i first reaches position j is the sum of the values in the i-th row of N (or the (i-1)-st row if i>j).
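
Continuing the sketch above (again, the function name is mine), the absorption calculation looks like this:

def expected_shuffles(n, i, j):
    """Expected number of GSR shuffles until the card at position i first reaches position j."""
    P = transition_matrix(n)
    Q = np.delete(np.delete(P, j - 1, axis=0), j - 1, axis=1)  # remove the target state j
    N = np.linalg.inv(np.eye(n - 1) - Q)                       # fundamental matrix
    row = i - 1 if i < j else i - 2                            # row of N corresponding to state i
    return N[row].sum()

print(expected_shuffles(52, 1, 52))   # top card to bottom: about 104 shuffles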

Solution

The answer is interesting and perhaps surprising. First: it typically takes a long time for the top card to reach the bottom of the deck, over 100 shuffles on average. Second: this average is actually exactly 104 shuffles! More generally, given a deck with n cards, the expected number of GSR shuffles to move the top card to the bottom of the deck appears to be exactly 2n. This suggests once again that there may be a nice intuitive argument for this result that I don’t yet see.

Finally, and perhaps most surprising: it takes about that same hundred-or-so shuffles on average to move any card to the bottom of the deck!

Expected number of shuffles to move a card to the bottom of a 52-card deck.

In the above figure, the x-axis indicates the starting position of the card that we want to track. The y-axis indicates the expected number of shuffles needed to move that card to the bottom of the deck. For example, even the next-to-bottom card of the deck takes about 102.6 shuffles on average to reach the bottom of the deck.


A different sock matching problem

When I take a load of laundry from the dryer, there are socks mixed in with all of the other clothes. Suppose that there are n pairs of socks, 2n socks in total, that I match and fold by randomly drawing one sock at a time from the basket. If the sock matches one that I have already drawn, I fold the matching pair and put it away in the sock drawer. Otherwise, I set the unmatched sock aside, anticipating matching it later. How much space does this take? That is, let U be the maximum number of unmatched socks set aside at any point during this process. What is the distribution of U?

There are at least a couple of different possible problems here, depending on what constitutes a matching pair of socks. Arguably the most natural setup is that all n pairs are distinct (e.g., each pair of my dress socks is a different color), so that each individual sock has exactly one mate. This is what has been described as the sock matching problem in the literature; see the references below.

My athletic socks, on the other hand, are essentially n identical pairs, with each individual sock being distinguished only by a “left” or “right” label stitched into it, so that each sock may be matched with any of the n other “differently-handed” socks. In this case, it’s a nice problem to show that

P(U \geq u) = \frac{1}{{2n \choose n}} \cdot 2\sum\limits_{k=1}^{\lfloor n/u \rfloor} (-1)^{k-1}{2n \choose n-ku}

and thus

E[U] = \frac{1}{{2n \choose n}} \cdot 2\sum\limits_{u=1}^n \sum\limits_{k=1}^{\lfloor n/u \rfloor} (-1)^{k-1}{2n \choose n-ku}

But what I found most interesting about this problem is that E[U] appears to be very well approximated by \sqrt{\frac{3}{2}n}, with an error that I conjecture is always less than 1/2, and approaches zero in the limit as n grows large. I don’t see how to prove this, though.
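
As a sanity check, the following short snippet (the function name is mine) evaluates the formula for E[U] directly and compares it against \sqrt{\frac{3}{2}n}:

from math import comb, sqrt

def expected_max_unmatched(n):
    """E[U] for n identical left/right pairs, via the double summation above."""
    total = sum(2 * sum((-1) ** (k - 1) * comb(2 * n, n - k * u)
                        for k in range(1, n // u + 1))
                for u in range(1, n + 1))
    return total / comb(2 * n, n)

for n in (10, 100, 1000):
    print(n, expected_max_unmatched(n), sqrt(1.5 * n))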

References:

  1. Gilliand, S., Johnson, C., Rush, S., and Wood, D., The sock matching problem, Involve, 7(5) 2014, p. 691-697. [PDF]
  2. Pantic, B. and Bodroza-Pantic, O., A brief overview of the sock matching problem. [arXiv]
  3. OEIS Foundation Inc. (2019), The On-Line Encyclopedia of Integer Sequences [A225177]

The hot hand

The wager described in the previous post was motivated by an interesting recent paper (Miller and Sanjurjo, reference below) discussing the hot hand, or the perception in some sports, particularly basketball, that a player with a streak of successful shots (“He’s heating up!”) has an increased probability of making a subsequent shot and continuing the streak (“He’s on fire!”).

Whether being on a streak yourself, or trying to defend a player on a streak, on the court this certainly feels like a real phenomenon. But is the hot hand a real effect, or just another example of our human tendency to see patterns in randomness?

A famous 1985 paper (Gilovich, Vallone, and Tversky, reference below) argued the latter, analyzing the proportion of successful shots immediately following a streak of three made shots in various settings (NBA field goals, free throws, and a controlled experiment with college players). Since they found no significant increase in the proportion of “streak-extending” shots made, the apparent conclusion was that a past streak has no effect on current success.

But that’s where this puzzle comes in: even if basketball shots are truly iid with success probability p, we should expect a negative bias in the proportion of shots made following a streak, at least compared to the intuitively expected proportion p. Miller and Sanjurjo argue that the absence of this bias in the 1985 data suggests that the hot hand is not just “a cognitive illusion.”

Both papers are interesting reads. In presenting the problem here as a gambling wager, I simplified things somewhat down to a “win, lose, or push” outcome (i.e., were there more streak-extending successes than failures, or fewer), since the resulting exact expected return can be computed more efficiently than the expected proportion of successes following a streak:

Given n remaining trials (basketball shots, coin flips, whatever) with success probability p, noting outcomes following streaks of length s, and winning (losing) the overall wager if the number of streak-extending successes is greater (less) than the number of streak-ending failures, the expected return is v(n,0,0), computed recursively via

v(0,r,w) = sgn(w)

v(n,s,w) = p v(n-1,s,w+1)+(1-p)v(n-1,0,w-1)

v(n,r,w) = p v(n-1,r+1,w)+(1-p)v(n-1,0,w) \quad (r < s)

Here r is the length of the current streak of successes, and w is the running count of noted streak-extending successes minus noted streak-ending failures. Using the setup in the previous post, where we flip n=100 fair coins with probability p=1/2 of heads, looking for heads extending streaks of length s=4, the expected return v(100,0,0) is about -0.150825.
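
The recursion translates directly into a short memoized Python function (a sketch; the function names are mine):

from functools import lru_cache

def expected_return(n, p, s):
    """Expected return of the streak wager, via the recursion above."""
    @lru_cache(maxsize=None)
    def v(n, r, w):
        if n == 0:
            return (w > 0) - (w < 0)    # sgn(w)
        if r == s:
            # This flip is noted: a success extends the streak, a failure ends it.
            return p * v(n - 1, s, w + 1) + (1 - p) * v(n - 1, 0, w - 1)
        return p * v(n - 1, r + 1, w) + (1 - p) * v(n - 1, 0, w)
    return v(n, 0, 0)

print(expected_return(100, 0.5, 4))   # about -0.150825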

References:

  1. Gilovich, T., Vallone, R., and Tversky, A., The Hot Hand in Basketball: On the Misperception of Random Sequences, Cognitive Psychology, 17(3) July 1985, p. 295-314 [PDF]
  2. Miller, Joshua B. and Sanjurjo, Adam, Surprised by the Hot Hand Fallacy? A Truth in the Law of Small Numbers, Econometrica, 86(6) November 2018, p. 2019-2047 [arXiv]

Gambler’s fallacy fallacy?

The gambler’s fallacy is the belief that roulette wheels, dice, etc., have “memory,” so that, for example, having observed an unlikely streak of losses, the probability that the next outcome will be a win has increased, as a correction toward the expected long-term trend. The Wikipedia page provides a good example involving repeatedly flipping a fair coin:

“If after tossing four heads in a row, the next coin toss also came up heads, it would complete a run of five successive heads. Since the probability of a run of five successive heads is [only] 1/32, a person might believe that the next flip would be more likely to come up tails rather than heads again. This is incorrect and is an example of the gambler’s fallacy.”

That is, having observed a streak of four heads in a row, we are actually just as likely to observe heads again on the subsequent fifth flip as we are to observe tails. Similarly, even after betting on red at the roulette wheel and losing four times in a row, we should still expect to win a fifth such bet on red the same stubborn 18/38 of the time (assuming a typical double-zero American wheel).

Right?

So, here is what I think is an interesting puzzle: let’s play a game. I will flip a fair coin n=100 times, and prior to each flip, if you have observed a current streak of s=4 or more consecutive heads, then make a note of the outcome of the subsequent flip. After all 100 coin flips, tally the noted “streak-following” flips: if there are more heads than tails, then I pay you one dollar. If there are more tails than heads, then you pay me one dollar. (If there are an equal number of heads and tails, then we push.)

If the gambler’s fallacy is indeed a fallacy, then shouldn’t this be a fair bet, i.e., with net zero expected return? But I claim that I have a significant advantage in this game, taking more than 15 cents from you on average every time we play! Following a streak of heads, we expect to observe a much larger proportion of “trend-correcting” tails than “streak-extending” heads.

And there is nothing special or tricky about this particular setup. Try this experiment with a different number n of coin flips, or a longer or shorter “target” streak length s, or even a roulette-like bias on the coin flip probability. Or instead of focusing only on streaks of consecutive heads (i.e., ignoring streaks of tails), look for streaks of s or more heads or s or more tails, and note whether the subsequent flip differs from the streak. The effect persists: on average, we observe a larger-than-expected proportion of outcomes that tend to end the streak.
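
A quick Monte Carlo sketch of the basic version of the game (n=100, s=4, heads only) makes the effect easy to check for yourself; the function name and trial count here are mine:

import random

def play_once(n=100, s=4):
    """One play: +1 if the noted flips show more heads than tails, -1 if fewer, 0 on a push."""
    streak, heads, tails = 0, 0, 0
    for _ in range(n):
        flip_is_heads = random.random() < 0.5
        if streak >= s:    # current streak is long enough, so note this flip
            if flip_is_heads:
                heads = heads + 1
            else:
                tails = tails + 1
        streak = streak + 1 if flip_is_heads else 0
    return (heads > tails) - (heads < tails)

trials = 100000
print(sum(play_once() for _ in range(trials)) / trials)   # roughly -0.15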


Reproducing randomness

Introduction

We often need “random” numbers to simulate effects that are practically non-deterministic, such as measurement noise, reaction times of human operators, etc. However, we also need to be able to reproduce experimental results– for example, if we run hundreds of Monte Carlo iterations of a simulation, and something weird happens in iteration #137, whether a bug or an interesting new behavior, we need to be able to go back and reproduce that specific weirdness, exactly and repeatably.

Pseudo-random number generators are designed to meet these two seemingly conflicting requirements: generate one or more sequences of numbers having the statistically representative appearance of randomness, but with a mechanism for exactly reproducing any particular such sequence repeatably.

The motivation for this post is to describe the behavior of pseudo-random number generation in several common modeling and simulation environments: Python, MATLAB, and C++. In particular, given a sequence of random numbers generated in one of these environments, how difficult is it to reproduce that same sequence in another environment?

Mersenne Twister

In many programming languages, including Python, MATLAB, and C++, the default, available-out-of-the-box random number generator is the Mersenne Twister. Unfortunately, despite its ubiquity, this generator is not always implemented in exactly the same way, which can make reproducing simulated randomness across languages tricky. Let’s look at each of these three languages in turn.

(As an aside, it’s worth noting that the Mersenne Twister has an interesting reputation. Although it’s certainly not optimal, it’s also not fundamentally broken, but lately it seems fashionable to sniff at it as if it were, in favor of more recently developed alternatives. Those alternatives are sometimes justified by qualitative arguments or even quantitative metrics that are, in my opinion, often of little practical consequence in many actual applications. But that is another much longer post.)

Python

Python is arguably the easiest environment in which to reproduce the behavior of the original reference C implementation of the Mersenne Twister. Python’s built-in random.random(), as well as Numpy’s legacy numpy.random.random(), both use the same reference genrand_res53 method for generating a double-precision random number uniformly distributed in the interval [0,1):

  1. Draw two 32-bit unsigned integer words (w_1, w_2).
  2. Concatenate the high 27 bits of w_1 with the high 26 bits of w_2.
  3. Divide the resulting 53-bit integer by 2^{53}, yielding a value in the range [0, 1-2^{-53}].

So given the same generator state, both the built-in and Numpy implementations yield the same sequence of double-precision values as the reference implementation.
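
Written out in Python (a small helper of my own, operating on two 32-bit words already drawn from the generator), that construction is:

def genrand_res53(w1, w2):
    """Combine two 32-bit words into a double in [0, 1), as in the reference implementation."""
    a = w1 >> 5                         # high 27 bits of the first word
    b = w2 >> 6                         # high 26 bits of the second word
    return (a * 2 ** 26 + b) / 2 ** 53  # 53-bit integer scaled into [0, 1)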

Seeding is slightly messier, though. The reference C implementation provides two methods for seeding: init_genrand takes a single 32-bit word as a seed, while init_by_array takes an array of words of arbitrary length. Numpy’s numpy.random.seed(s) behaves like init_genrand when the provided seed is a single 32-bit word. However, the built-in random.seed(s) always uses the more general init_by_array, even when the provided seed is just a single word… which yields a different generator state than the corresponding call to init_genrand.
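
Per the behavior just described, seeding both generators with the same single word should therefore not reproduce the same stream; a quick check (with an arbitrarily chosen seed value) looks like this:

import random
import numpy as np

random.seed(12345)      # uses init_by_array([12345])
np.random.seed(12345)   # behaves like init_genrand(12345)
print(random.random(), np.random.random())   # different states, so different values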

MATLAB

Although MATLAB provides several different generators, the Mersenne Twister has been the default since 2005. Its seeding method rng(s) is “almost” exactly like the reference init_genrand(s)— and thus like numpy.random.seed(s)— accepting a single 32-bit word as a seed… except that s=0 is special, being shorthand for the “default” seed value s=5489.

The rand() function is also “almost” identical to the reference genrand_res53 described above… except that it returns random values in the open interval (0,1) instead of the half-open interval [0,1). There is an explicit rejection-and-resample if zero is generated. It’s an interesting question whether this has ever been observed in the wild: it’s confirmed not to occur anywhere in the first 2^{21} random draws from any of the sequences indexed by the 2^{32} possible single-word seeds.

C++

Both Visual Studio and GCC implementations of the Mersenne Twister in std::mt19937 (which is also the std::default_random_engine in both cases) allow for the same single-word seeding as the reference init_genrand (and numpy.random.seed, and MATLAB’s rng excepting the special zero seed behavior mentioned above).

However, the C++ approach to generating random values from std::uniform_real_distribution<double> is different from the reference implementation used by Python and MATLAB:

  1. Draw two 32-bit unsigned integer words (w_1, w_2).
  2. Concatenate all 32 bits of w_2 with all 32 bits of w_1.
  3. Divide the resulting 64-bit integer by 2^{64}.

By using all 64 bits, this has the effect of allowing generation of more than just 2^{53} equally-separated possible values, retaining more bits of precision for values in the interval (0,1/2) where some leading bits are zero.

Another important difference here from the reference implementation is that the two random words are reversed: the first 32-bit word forms the least significant 32 bits of the fraction, and the second word forms the most significant bits. I am not sure if there is any theoretically justified reason for this difference. One possible practical justification, though, might be to prevent confusion when comparing with the reference implementation: even for the same seed, these random double-precision values “look” nothing at all like those from the reference implementation. Without the reversal, this sequence would be within the eyeball norm of the reference, matching in the most significant 27 or more bits, but not yielding exactly the same sequence.
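
As a sketch of the construction just described (not the actual library source), the corresponding combination of two words would look like:

def cpp_style_double(w1, w2):
    """Combine two 32-bit words into a double in [0, 1), per the steps above."""
    return ((w2 << 32) | w1) / 2 ** 64   # the second word forms the most significant bits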


The distribution of “roll and keep” dice

Introduction

Some role-playing games involve a “roll and keep” dice mechanic: you roll some number of dice, but only “keep” a specified number of them with the largest values, and the result of the roll is the sum of the kept dice. For example, rolling five dice and keeping the largest three (sometimes denoted 5d6k3) could yield a score between 3 and 18, inclusive. What is the probability of each possible score?

The motivation for this post is (1) to capture my notes on the general solution to this problem, and (2) to describe some potential optimizations in implementation to handle very large instances of the problem, such as this Hacker Rank version of the problem, where we may be rolling as many as 10,000 dice.

The solution

To start, let’s simplify the problem by just rolling and keeping all of the dice without discarding any, since the order statistics are what make this problem complicated. Given n dice each with d sides, the number r_{n,d}(s) of equally likely ways to roll a score s is given by

r_{0,d}(0) = 1

r_{n,0}(s) = 0 \quad (n > 0 \textrm{ or } s > 0)

r_{n,d}(s) = \sum\limits_{k=0}^{\lfloor\frac{s-n}{d}\rfloor} (-1)^k {n \choose k} {{s-k d-1} \choose {n-1}}

so that the probability of rolling a score s is r_{n,d}(s)/d^n.

If we now consider keeping only m \leq n of the dice, then we can group the r_{n,d,m}(s) desired outcomes with score s by:

  • the smallest value a among the m largest dice that we keep,
  • the number b of kept dice that are strictly greater than a, and
  • the number c of discarded dice that are strictly less than a.

This yields the following summation:

r_{n,d,m}(s) = \sum\limits_{a=1}^d \sum\limits_{b=0}^{m-1} \sum\limits_{c=0}^{n-m} {n \choose {b,c}} (a-1)^c r_{b,d-a}(s-a m)

Implementation details

At this point, we’re done… except that if the number n of dice rolled is large, then a straightforward implementation of this nested summation will be pretty slow, involving calculation of a large number of very large multinomial coefficients.

My first thought was to speed up calculation of those very large coefficients via their prime factorization using Legendre’s formula. The paper by Goetgheluck linked at the end of this post describes a more optimized approach, but the following simple Python implementation is already significantly faster than the usual iterative algorithm for sufficiently large inputs (the implementation of primes(n) is left as an exercise for the reader, or a future post):

def binomial(n, k):
    """Large binomial coefficient n choose k."""
    if k < 0 or k > n:
        return 0
    c = 1
    for p in primes(n):
        c = c * p ** (v_fact(n, p) - v_fact(k, p) - v_fact(n - k, p))
    return c

def v_fact(n, p):
    """Largest power of prime p dividing n factorial."""
    v = 0
    while n > 0:
        n = n // p
        v = v + n
    return v

(It’s worth noting that the Hacker Rank problem actually only asks for the number of ways to roll a given total modulo a prime, suggesting that a similar but even more effective optimization using Lucas’s theorem is probably intended.)

But we can do much better than this, by observing that in the “usual” iterative algorithm, all of the intermediate products inside the loop are also binomial coefficients… and we need all of them for this problem. That is, we can rewrite the summation as

r_{n,d,m}(s) = \sum\limits_{a=1}^d \sum\limits_{b=0}^{m-1} {n \choose b} r_{b,d-a}(s-a m) \sum\limits_{c=0}^{n-m} {n-b \choose c} (a-1)^c

and then “embed” the usual iterative calculation of the binomial coefficients inside the nested summations. The following implementation eliminates all but a single explicit calculation of a binomial coefficient:

def rolls_keep(s, n, d, m):
    """Ways to roll sum s with n d-sided dice keeping m largest."""
    result = 0
    for a in range(1, d + 1):
        choose_b = 1                          # running value of (n choose b)
        for b in range(0, m):
            sum_c = 0
            choose_c = 1                      # running value of (n-b choose c)
            pow_c = 1                         # running value of (a-1)^c
            for c in range(0, n - m + 1):
                sum_c = sum_c + choose_c * pow_c
                choose_c = choose_c * (n - b - c) // (c + 1)
                pow_c = pow_c * (a - 1)
            result = result + choose_b * rolls(s - a * m, b, d - a) * sum_c
            choose_b = choose_b * (n - b) // (b + 1)
    return result

def rolls(s, n, d):
    """Ways to roll sum s with n d-sided dice."""
    if s == 0 and n == 0:
        return 1
    if d == 0:
        return 0
    result = 0
    choose_k = 1
    for k in range(0, (s - n) // d + 1):
        result = result + (-1) ** k * choose_k * binomial(s - k * d - 1, n - 1)
        choose_k = choose_k * (n - k) // (k + 1)
    return result
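
As a quick sanity check, we can substitute a simple stand-in for binomial (so that the snippet runs without the primes(n) helper) and verify that the kept-score counts for 5d6k3 total 6^5 = 7776:

from math import comb

def binomial(n, k):
    """Simple stand-in for the prime-factorization version above."""
    return comb(n, k) if 0 <= k <= n else 0

n, d, m = 5, 6, 3
counts = {s: rolls_keep(s, n, d, m) for s in range(m, m * d + 1)}
assert sum(counts.values()) == d ** n
print(counts[m * d] / d ** n)   # probability that all three kept dice are sixes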

Reference:

  1. Goetgheluck, P., Computing Binomial Coefficients, The American Mathematical Monthly, 94(4) April 1987, p. 360-365 [JSTOR]