How many times should you riffle shuffle a deck of cards? A commonly cited rule of thumb is that seven riffle shuffles are sufficient to randomize a standard 52-card deck. The motivation for this post is to refine this rule in a couple of ways: first, even after seven riffle shuffles, enough order still remains in the deck that we can exploit it with a reasonably simple wager (see [2]). This seems to suggest that we need more than seven shuffles, and usually we do, but it is possible, at least in principle, to repeatedly riffle shuffle in such a way that (a) we can tell when it is okay to stop, (b) sometimes after just seven or even as few as six shuffles, and (c) the resulting arrangement is not just approximately but perfectly random.
Betting on seven shuffles
Alice and Bob are playing a game. They begin with a brand new deck of playing cards, with the cards in the standard “new deck order”: from top to bottom, hearts ace through king, clubs ace through king, diamonds king through ace, and spades king through ace.
The deck is shuffled, and cards are dealt one at a time from the top of the deck, placing each card dealt back on the bottom of the deck. As the cards are dealt, Alice is looking for the cards from the original top half of the deck: ace through king of hearts, followed by ace through king of clubs, in that order. Meanwhile, Bob is looking for the cards from the original bottom half of the deck, but in reverse order: ace through king of spades, followed by ace through king of diamonds.
When Alice’s first target card, the ace of hearts, is dealt, instead of returning it to the bottom of the deck, we remove it and set it aside in Alice’s pile. Then, once the two of hearts is dealt, we remove it and add it to Alice’s pile, etc. Similarly, when Bob’s first target card, the ace of spades, is dealt, we remove it and start Bob’s pile. The first player to complete his or her pile of 26 cards wins the game, receiving one dollar from the loser.
This should be a fair game, assuming that the deck is truly and thoroughly shuffled: Alice and Bob should each win with probability 1/2. However, starting from a new deck, even after riffle shuffling seven times, Alice wins over 80% of the time!
Reference [2] is accessible to undergraduates, and describes a beautiful formula for computing these probabilities of winning as a function of the number of initial riffle shuffles. But I think this game also makes a great simulation programming exercise, both to simulate the random riffle shuffles themselves, and to efficiently determine whether Alice or Bob wins given a particular shuffled arrangement of cards.
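To sketch such a simulation, the dealing phase of the game can be implemented directly with a queue. The function name and the encoding of cards as integers 0 through 51 in new deck order (top card first) are my own illustrative choices, not from the referenced paper:

```python
from collections import deque

def game_winner(deck):
    """Deal through the deck (top card first); return 'Alice' or 'Bob'.

    Cards are encoded 0..51 in new deck order, so Alice's targets are
    0..25 in order, and Bob's are 51..26 (the bottom half, reversed).
    """
    cards = deque(deck)
    alice, bob = 0, 51          # next target card for each player
    a_count = b_count = 0       # cards collected so far
    while True:
        card = cards.popleft()  # deal the top card
        if card == alice:       # Alice collects her next target...
            alice += 1
            a_count += 1
            if a_count == 26:
                return 'Alice'
        elif card == bob:       # ...or Bob collects his...
            bob -= 1
            b_count += 1
            if b_count == 26:
                return 'Bob'
        else:                   # ...otherwise return it to the bottom.
            cards.append(card)
```

An unshuffled new deck lets Alice collect her entire pile on the first pass; applying this function to many riffle-shuffled decks estimates each player's winning probability.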
Riffle shuffles and inverse shuffles
The game described above suggests that seven shuffles is not enough to randomize the deck. So, how many more shuffles do we need?
First, let’s review the Gilbert-Shannon-Reeds model of a random riffle shuffle. We can represent a riffle shuffle of a deck of n cards as a uniformly random string of n bits. The number of zero bits indicates how many cards to cut from the top of the deck, and the positions of the zero bits indicate how those top cards are interleaved with the cards from the bottom part of the deck (represented by the one bits).
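As a concrete sketch of this encoding (the function name and NumPy representation are assumptions of mine, not part of the model's definition):

```python
import numpy as np

def riffle_shuffle(deck, bits):
    """Apply the GSR riffle shuffle encoded by the given 0/1 bit string.

    The number of zeros is the cut size; the zero positions receive the
    top cards (in order) and the one positions receive the bottom cards.
    """
    deck = np.asarray(deck)
    bits = np.asarray(bits)
    cut = np.count_nonzero(bits == 0)   # size of the top packet
    result = np.empty_like(deck)
    result[bits == 0] = deck[:cut]      # interleave the top packet...
    result[bits == 1] = deck[cut:]      # ...with the bottom packet
    return result
```

For example, shuffling [0, 1, 2, 3] with bits [0, 1, 0, 1] cuts off the top two cards and interleaves them into the zero positions, giving [0, 2, 1, 3].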
We can associate each such bit string encoding of a riffle shuffle with the corresponding permutation π, indicating that the riffle shuffle moves the card initially in position j to position π(j). Repeated shuffling corresponds to composition, so that the effect of riffle shuffle π₁ followed by shuffle π₂ is π₂ ∘ π₁.
For reasons we will see shortly, it is also useful to consider the action of the inverse permutation associated with a particular bit string encoding of a riffle shuffle. Imagine marking each card in the deck with the corresponding zero or one in the bit string; then “deinterleave,” sliding the “zero” cards out while preserving their relative order, and placing them on top of the remaining pack of “one” cards. The reader can verify that this “inverse shuffle” permutation is indeed the inverse of the shuffle permutation corresponding to the same bit string encoding.
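A minimal sketch of this deinterleaving, implemented as a stable sort on the bit labels (the helper name is mine):

```python
import numpy as np

def inverse_riffle_shuffle(deck, bits):
    """Deinterleave: slide the 'zero' cards out, preserving their
    relative order, and place them on top of the 'one' cards."""
    deck = np.asarray(deck)
    bits = np.asarray(bits)
    # A stable sort by bit keeps each packet's relative order intact,
    # with all zero-labeled cards moved above the one-labeled cards.
    return deck[np.argsort(bits, kind='stable')]
```

Inverse shuffling with the same bit string used for a forward riffle shuffle returns the deck to its original order; e.g., deinterleaving [0, 2, 1, 3] with bits [0, 1, 0, 1] recovers [0, 1, 2, 3].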
It turns out that inverse riffle shuffles are really handy, because of the following result (Lemma 9 in [1]). Suppose that we start with a new deck of cards, and repeatedly inverse shuffle the deck as follows:
- Generate a random bit string, and mark each card with its corresponding 0 or 1 label.
- Inverse shuffle the deck according to this bit string encoding; i.e., slide the 0 cards out and place them on top of the 1 cards.
- Repeat steps 1 and 2… but in step 1, place each new randomly generated 0 or 1 label to the left of the previous labels on the card (from the previous inverse shuffles), effectively prepending a new most significant bit. Thus, during the k-th inverse shuffle, each card will have a k-bit integer label, and the execution of step 2 corresponds to a (stable) sorting of the cards by these integer labels.
If we continue inverse shuffling in this manner, stopping when we observe that the integer labels on the cards are all distinct, then the resulting (inverse) shuffled deck is fully randomized: the resulting arrangement of cards in the deck is perfectly uniformly distributed. (More precisely, if the random variable T is the minimum number of inverse shuffles required for all of the card labels to be distinct, then T is a strong uniform stopping time.)
Shuffling backward in time
We’re not quite done yet. So far we have only described a method of inverse shuffling, and detecting when we can stop inverse shuffling, confident that the resulting arrangement of cards is perfectly randomized. How can we apply this to normal riffle shuffling?
The key observation (derived from Lemma 8 in [1]) is that the sequence of randomly generated “forward in time” riffle shuffles results in a distribution of deck arrangements with the same distance from uniform as the sequence of corresponding inverse shuffles, in reverse order (i.e., executed “backward in time”).
Thus, as we riffle shuffle the deck, first with permutation π₁, then with π₂, etc., we can evaluate our stopping condition after k shuffles by checking for distinct card labels resulting from the inverse shuffles πₖ⁻¹, …, π₁⁻¹, executed in that (reverse) order.
The following Python code implements this method, returning a list of individual riffle shuffles, as the corresponding bit strings, that when executed in order realize a uniformly random permutation. Note that the call to reversed() is critical; without it, the resulting distribution of possible arrangements is observably non-uniform.
```python
import numpy as np

def uniform_riffle_shuffles(n, rng=np.random.default_rng()):
    """Return list of encodings of riffle shuffles of deck of n cards."""
    riffles = []
    while True:
        # Generate another riffle shuffle.
        riffle = rng.integers(0, 2, n)
        riffles.append(riffle)

        # Perform all inverse shuffles in reverse order, accumulating
        # the integer label on each card one bit at a time.
        labels = np.zeros(n, dtype=int)
        for bit, riffle in enumerate(reversed(riffles)):
            p = np.argsort(riffle, kind='stable')
            labels = (labels + riffle * (2 ** bit))[p]

        # Stop when card labels are all distinct.
        if len(set(labels)) == n:
            break
    return riffles
```
To wrap up, how long does this perfectly random shuffling process take? This turns out to be an instance of the birthday problem: if the random variable T indicates the number of shuffles required to randomize a deck with n cards, then the cumulative distribution function is the probability that the n card labels (think people), each of which is a k-bit integer (think 2^k possible birthdays), are all distinct:

P(T ≤ k) = (2^k)_n / 2^(kn)

where (m)_n = m(m−1)⋯(m−n+1) is the falling factorial. The following figure shows the resulting distribution, with the CDF in blue, the PDF in red, and the mean of approximately 11.7243 shuffles in black.
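This distribution is straightforward to evaluate exactly; the following sketch (the function name is mine) uses rational arithmetic to compute the CDF and the mean:

```python
from fractions import Fraction

def shuffles_cdf(k, n=52):
    """P(T <= k): probability that n labels, each a uniformly random
    k-bit integer, are all distinct (the birthday problem)."""
    days = 2 ** k                      # number of possible "birthdays"
    p = Fraction(1)
    for i in range(n):
        p *= Fraction(days - i, days)  # i-th label avoids the first i
    return p

# E[T] = sum over k >= 0 of P(T > k); the tail is negligible past k = 64.
mean = float(sum(1 - shuffles_cdf(k) for k in range(64)))
```

Note that for k with 2^k < 52 the product includes a zero factor, so the CDF is correctly zero there: fewer than six shuffles can never produce all-distinct labels.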
- [1] Aldous, D. and Diaconis, P., Shuffling Cards and Stopping Times, The American Mathematical Monthly, 93(5) 1986, p. 333-348 [PDF]
- [2] van Zuylen, A. and Schalekamp, F., The Achilles’ Heel of the GSR Shuffle: A Note on New Age Solitaire, Probability in the Engineering and Informational Sciences, 18(3) July 2004, p. 315-328 [DOI]