Why fewer decks are better for the blackjack player

This is a follow-up to last week’s post, which ended with the question “Why are fewer decks better for the player in blackjack?” still mostly unresolved.  After another week to think about the problem and explore some ideas, I think we can now answer this question with some confidence.

To make this self-contained at the expense of some repetition from last week, let’s begin at the beginning.  Assuming a fixed set of rules common in many casinos– the particular choice of rules matters little for our purpose here– the following plot shows the player’s optimal expected return for a single round of blackjack, in percent of initial wager, as a function of the number of decks in the shoe.

Figure 1. Expected value in percent of initial wager vs. number of decks, assuming optimal composition-dependent (CDZ-) strategy. Rules are S17, DOA, DAS, SPL3 (no RSA), no surrender.

Note that the number of decks is indicated on a logarithmic scale, to emphasize the asymptotic behavior of expected return as the number of decks grows large.  The question is: why does the player’s expected return increase in games with fewer decks?

The “standard” answer is that fewer decks make blackjacks more likely, as suggested by the relevant Wikipedia page:

“All things being equal, using fewer decks decreases the house edge. This mainly reflects an increased likelihood of player blackjack, since if the players draws a ten on their first card, the subsequent probability of drawing an ace is higher with fewer decks. It also reflects a decreased likelihood of blackjack-blackjack push in a game with fewer decks.”
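The blackjack-frequency part of this claim is easy to check directly: the probability of being dealt a natural from a fresh d-deck shoe is 2 * (16d / 52d) * (4d / (52d - 1)), which shrinks as d grows.  Here is a minimal sketch of that calculation (the function name is mine, just for illustration):

```python
from fractions import Fraction

def p_blackjack(decks):
    """Probability that a two-card hand dealt from a fresh shoe of the
    given number of decks is a natural (a ten-valued card plus an ace)."""
    tens, aces, cards = 16 * decks, 4 * decks, 52 * decks
    # Ten-then-ace or ace-then-ten.
    return 2 * Fraction(tens, cards) * Fraction(aces, cards - 1)

for decks in (1, 2, 4, 6, 8):
    print(decks, float(p_blackjack(decks)))
```

With a single deck the probability is about 4.83%, falling toward roughly 4.73% in the infinite-shoe limit.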

This is a real (and easily calculated) effect, but only a minor secondary one.  As we did last week, we can show this by considering a modified game in which the bonuses and penalties associated with blackjacks are removed, and showing that the trend in expected value vs. number of decks remains.  The following plot shows this modified game in red, with the blue points corresponding to the normal game as in Figure 1.

Figure 2. Expected value vs. number of decks for the normal game (in blue) and for the game with no blackjack bonuses or penalties (in red).

As you can see, the absolute expected return in a “blackjack-less” game is miserable, but that’s not the point.  The point is that the trend remains; even with no blackjacks, fewer decks are still better for the player.  To see this more clearly (here and in what follows), instead of plotting absolute expected values, let’s normalize the data relative to an “infinite shoe,” and plot the gain in expected value for a particular number of decks, compared with the expected value for an infinite number of decks (assuming the same rules and playing strategy).

Figure 3. Gain in expected value (relative to an infinite shoe) vs. number of decks, for the normal game (in blue) and for the game with no blackjack bonuses or penalties (in red).

At this point last week, I speculated that the main reason that fewer decks are better is the greater opportunity to vary “composition-dependent” playing strategy, where decisions to stand, hit, etc., may depend not only on the player’s hand total, but on how that total is composed (e.g., is a hard 16 made up of 10-6, or 10-3-3, or 8-4-4, etc.?).  With a sufficiently large number of decks, strategy is effectively just “total-dependent,” since the composition of the hand has a negligible effect on the probability distribution of card ranks remaining in the shoe.  (This sufficiently large number of decks turns out to be 125.)  But with fewer decks, each individual card dealt from the shoe provides more information to the player, the result being that the player’s strategy is “more composition-dependent” in games with fewer decks.
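To get a feel for how much the composition matters, here is a minimal sketch that looks only at the probability that the next card busts a hard 16, for the 10-6 and 8-4-4 compositions against a dealer ten.  (The real strategy decision compares full expected values for standing versus hitting, so this is only a proxy for the information carried by the cards already seen; the function name is mine.)

```python
def p_next_card_busts_16(decks, seen):
    """Probability that the next card dealt busts a hard 16, given the
    ranks already seen (the player's cards plus the dealer's up card).
    Ranks: 1 for an ace, 2 through 9, and 10 for any ten-valued card."""
    shoe = {rank: 4 * decks for rank in range(1, 10)}
    shoe[10] = 16 * decks
    for rank in seen:
        shoe[rank] -= 1
    remaining = sum(shoe.values())
    # Any card of value 6 or more busts a hard 16 (an ace counts as 1 here).
    return sum(n for rank, n in shoe.items() if rank >= 6) / remaining

for decks in (1, 8):
    print(decks,
          round(p_next_card_busts_16(decks, [10, 6, 10]), 3),    # 10-6 vs. dealer ten
          round(p_next_card_busts_16(decks, [8, 4, 4, 10]), 3))  # 8-4-4 vs. dealer ten
```

With a single deck the two compositions give roughly 0.59 versus 0.63, a noticeable gap; with eight decks the gap shrinks to roughly 0.613 versus 0.617.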

We can test this theory in a similar fashion, by preventing the playing strategy from varying with number of decks, and seeing if the trend disappears.  To do this, we fix the player’s strategy to be the same (total-dependent) optimal strategy for the game with an infinite number of decks… no matter how many decks we are actually playing with.  That is, we no longer “know” whether we are playing with fewer decks.  The following plot shows the resulting behavior of gain in expected return vs. number of decks.

Figure 4. Gain in expected value (relative to an infinite shoe) vs. number of decks, for the normal game (in blue) and for the "blackjack-less" game played with fixed total-dependent strategy optimized for an infinite shoe (in red).
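For concreteness, the fixed strategy used for the red points can be thought of as a pure table lookup on the hand total and the dealer’s up card.  A minimal sketch of just the hard-total hit/stand slice of that table (soft totals, doubling, and splitting are omitted, and the function name is mine) looks like this:

```python
def hit_hard_total(total, dealer_up):
    """Hit/stand decision for hard totals under the fixed, infinite-shoe
    total-dependent strategy (S17 rules): the same answer no matter how
    the total is composed or how many decks are in play.
    dealer_up is 2 through 10, or 1 for an ace."""
    if total >= 17:
        return False                       # stand
    if total >= 13:
        return not (2 <= dealer_up <= 6)   # stand only against a weak up card
    if total == 12:
        return not (4 <= dealer_up <= 6)
    return True                            # 11 or less: always take a card
```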

This is where we left off last week.  We removed the effect of blackjack bonuses and penalties, then also removed the changing composition-dependence of playing strategy… and fewer decks are still better for the player.  What else might be the cause of the trend?

Similar to the blackjack bonus, perhaps it is the player’s “extra” options to double down and/or split pairs, which can yield larger returns– up to eight times the initial wager, with up to four split hands each doubled– even in a single round.  That is, even if we fix our playing strategy, maybe the advantage of being able to double down or split is greater with fewer decks?

Unfortunately, this also turns out not to be the case.  As before, this theory is easily tested, by simply removing the options to double down or split pairs, and seeing if the trend in expected return disappears.  In fact, let’s go a step further, by considering the simplest, most conservative possible “don’t bust” playing strategy: hit only those hands that we are guaranteed to improve (e.g., hard hands totaling less than 12), and stand on everything else.  Never double down, never split a pair.  The following plot shows the gain in expected return vs. number of decks using this strategy (in red).

Figure 5. Gain in expected value (relative to an infinite shoe) vs. number of decks, for the normal game (in blue) and for the "blackjack-less" game played with a "don't bust" strategy (in red).
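For what it’s worth, this stripped-down game is simple enough to simulate in a few lines.  The sketch below is Monte Carlo rather than the exact combinatorial calculation behind Figure 5, and the sampling noise at any reasonable number of rounds is larger than the per-deck differences being plotted, so treat it as an illustration of the rules rather than a way to reproduce the curve.

```python
import random

def deal_shoe(decks):
    """Fresh shuffled shoe as a list of card values (ace as 11, tens merged)."""
    shoe = ([11] + list(range(2, 10))) * 4 * decks + [10] * 16 * decks
    random.shuffle(shoe)
    return shoe

def hand_value(cards):
    """Best blackjack total, counting one ace as 11 when that does not bust."""
    total, aces = sum(cards), cards.count(11)
    while total > 21 and aces:
        total, aces = total - 10, aces - 1
    return total

def play_round(decks):
    """One round of the 'blackjack-less' game under the don't-bust strategy:
    hit only hard totals below 12, never double down or split.  A two-card 21
    is just another 21 here."""
    shoe = deal_shoe(decks)
    player = [shoe.pop(), shoe.pop()]
    dealer = [shoe.pop(), shoe.pop()]
    while hand_value(player) < 12:        # guaranteed not to bust
        player.append(shoe.pop())
    while hand_value(dealer) < 17:        # dealer draws to 17, stands on soft 17
        dealer.append(shoe.pop())
    p, d = hand_value(player), hand_value(dealer)
    if d > 21:
        return 1                          # dealer busts; the player never can
    return (p > d) - (p < d)              # win +1, push 0, lose -1

def estimate_ev(decks, rounds=200_000):
    return sum(play_round(decks) for _ in range(rounds)) / rounds
```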

Although this further reduces the magnitude of the advantage by a modest amount, the trend persists.  What else is there that varies with number of decks?

The answer is: the probabilities of outcomes of the dealer’s hand.  As mentioned at the outset, the probability of blackjack (which is the same for the player and the dealer) is greater with fewer decks.  But more importantly, the probability that the dealer busts is also greater with fewer decks.  In fact, the key observation is that the game played with the “don’t bust” strategy evaluated above very closely approximates the even simpler game where the player simply bets on whether the dealer busts or not.
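Here is a minimal sketch of the calculation involved: an exact recursion over the dealer’s draws that yields the distribution of the dealer’s final result (bust, or stand on 17 through 21) from a given up card, either for a fresh finite shoe with the up card removed, or for fixed infinite-shoe probabilities (decks=None).  Note that it does not condition on the dealer not having a natural, which is consistent with the “blackjack-less” game considered here, so the numbers will differ slightly from the usual published dealer-outcome tables.  The function names are mine, not the analysis software’s.

```python
from functools import lru_cache

RANKS = list(range(1, 10)) + [10]   # ace counted as 1; all ten-valued cards merged
PER_DECK = [4] * 9 + [16]           # count of each rank in a single deck

def dealer_outcomes(up_card, decks=None):
    """Exact distribution of the dealer's final result ('bust' or 17-21),
    drawing to 17 and standing on soft 17, starting from the given up card.
    decks=None models an infinite shoe with fixed rank probabilities."""

    @lru_cache(maxsize=None)
    def recurse(total, has_ace, shoe):
        value = total + 10 if has_ace and total + 10 <= 21 else total
        if value > 21:
            return {'bust': 1.0}
        if value >= 17:
            return {value: 1.0}
        cards_left = 52 if shoe is None else sum(shoe)
        dist = {}
        for i, rank in enumerate(RANKS):
            count = PER_DECK[i] if shoe is None else shoe[i]
            if count == 0:
                continue
            next_shoe = (None if shoe is None
                         else shoe[:i] + (shoe[i] - 1,) + shoe[i + 1:])
            sub = recurse(total + rank, has_ace or rank == 1, next_shoe)
            for outcome, p in sub.items():
                dist[outcome] = dist.get(outcome, 0.0) + (count / cards_left) * p
        return dist

    if decks is None:
        shoe = None
    else:
        counts = [c * decks for c in PER_DECK]
        counts[RANKS.index(up_card)] -= 1   # remove the up card from the shoe
        shoe = tuple(counts)
    return recurse(up_card, up_card == 1, shoe)

# Dealer bust probability from each up card: single deck vs. the infinite-shoe limit.
for up in (2, 6, 10, 1):
    print(up,
          round(dealer_outcomes(up, decks=1)['bust'], 4),
          round(dealer_outcomes(up)['bust'], 4))
```

The decks=None case produces exactly the kind of fixed, up-card-only distribution that serves as the “biased dice” in the experiment described next.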

So, the claim is that the simplest explanation for why fewer decks are better for the player is that dealer busts are more likely.  The challenge, however, is how to test this theory.  The approach with the previous claims was to remove– or fix– the particular aspect of the game that varied with number of decks.  In this case, we can’t very well remove the dealer… but with a bit of work, we can “fix” him.

To do this, consider the following (still “blackjack-less”) game: the dealer receives only his up card, without a hole card.  The player makes strategy decisions to stand, hit, double down, or split, in the optimal composition-dependent manner, as usual.  When it comes time to resolve the dealer’s hand, instead of drawing cards from the shoe, the dealer selects one of ten different biased dice, one for each possible up card, and rolls the appropriate die.  The outcome indicates whether the dealer busts, or stands with a total of 17 through 21.

In this way, the player has all of the same strategy options as in the normal game, with the sole exception of the bonuses and penalties of blackjacks (since we have already quantified that effect).  The only difference is that the probabilities of outcomes of the dealer’s hand are fixed, depending only on the up card, and not on the number of decks in the shoe.  (The natural choice for these fixed probabilities is the set of probabilities for an infinite shoe.)  If our theory is correct, then when we evaluate the player’s expected return in this modified game, we should see the trend in expected return for the player disappear.

Figure 6. Gain in expected value (relative to an infinite shoe) vs. number of decks, for the normal game (in blue) and for the "blackjack-less" game with fixed probabilities of outcomes of the dealer's hand (in red).

Cool!  This time the trend does indeed disappear.  The lack of caveats or special assumptions here is worth emphasizing: after imposing ever greater restrictions on rules and strategy, we have “given back” everything in the normal game except for the bonuses and penalties of blackjacks.  That is, the player is free to make optimal, possibly composition-dependent strategy decisions, including doubling down and splitting pairs, with knowledge of the newly-fixed probabilities of outcomes of the dealer’s hand.

Blackjack and number of decks

I recently made some additional changes to my blackjack combinatorial analysis software, in response to some interesting questions about card-counting systems.  In the process, I encountered some even more interesting– and mostly unrelated– results that I do not fully understand, since they seem to contradict generally accepted ideas about the game.

There are many different variations on the rules of blackjack: the dealer may stand or hit on a soft 17, the player may or may not be able to split pairs more than once, etc.  The game may also be played with different numbers of 52-card decks, from just a single deck to as many as 8 decks (416 cards) shuffled together.  The question here is: how does a player’s achievable expected return vary with the number of decks used… and why does it vary?

The first question is relatively easy to answer; the following plot shows the optimal expected value of a round of blackjack, in percent of initial wager, as a function of the number of decks used.

Expected value (in percent of initial wager) vs. number of decks, using optimal (CDZ-) composition-dependent zero-memory strategy. Rules are S17, DOA, DAS, SPL3 (no RSA), no surrender.

There are a couple of interesting observations.  First, in the game with a single deck, the player actually has the advantage over the house!  Of course, this is why these particular rule variations, despite being the “simplest” and most common, are not found in single-deck casino games.

More importantly, the trend is clear: fewer decks are better for the player.  As the number of decks increases, a player’s expected value decreases asymptotically, approaching the “infinite shoe” expected house edge of about half a percent.

Which brings us to the question motivating this post: why are fewer decks better for the player?  The answer that I see and hear most commonly is similar to that on the above-referenced Wikipedia page:

“All things being equal, using fewer decks decreases the house edge. This mainly reflects an increased likelihood of player blackjack [my emphasis], since if the players draws a ten on their first card, the subsequent probability of drawing an ace is higher with fewer decks. It also reflects a decreased likelihood of blackjack-blackjack push in a game with fewer decks.”

I have never bought this.  Although it is true that player blackjacks are indeed more likely with fewer decks (and pushed blackjacks are less likely), I thought that this was a secondary effect, and the real reason for the trend in overall expected return was something more information-theoretic, so to speak.  We’ll get to this shortly.

But first, it occurred to me that we could verify the claim by considering a modified form of the game: blackjack, but “without the blackjack.”  That is, suppose that we get rid of the 3:2 bonus of player blackjack, as well as the penalty of dealer blackjack, so that an initial ten-ace hand is just another hand totaling 21, no better or worse than, say, 9-7-5.  Of course, the absolute expected return in such a game will be miserable.  But the point is that, if the above claim is correct, then the trend in expected return vs. number of decks should disappear or at least diminish.
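To be concrete about what “getting rid of the blackjack” means, here is a minimal sketch of just the payoff rule for a flat, un-doubled, un-split bet, with a switch for the modified game (the interface is mine, not the analysis software’s):

```python
def settle(player_total, dealer_total, player_natural, dealer_natural,
           blackjackless=False):
    """Payoff, in units of the initial wager, for a single resolved hand with
    no doubling or splitting.  With blackjackless=True, a two-card 21 is just
    another 21: no 3:2 bonus, no automatic loss to a dealer natural."""
    if not blackjackless:
        if player_natural and dealer_natural:
            return 0          # blackjack-blackjack push
        if player_natural:
            return 1.5        # the 3:2 bonus removed in the modified game
        if dealer_natural:
            return -1
    if player_total > 21:
        return -1             # player bust loses even if the dealer also busts
    if dealer_total > 21:
        return 1
    return (player_total > dealer_total) - (player_total < dealer_total)
```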

No such luck; following is the same plot as above, but with the addition of the red points indicating the optimal expected return for the player in the game without blackjacks.

Expected value vs. number of decks for the normal game (in blue) and for the game without blackjack bonuses or penalties (in red).

At least from this perspective, the blackjack bonuses and penalties do not seem to be the primary reason that “fewer decks are better.”  So what is the reason?

My view has been that the main reason is that, with fewer decks, each individual card dealt from the shoe provides more information to the player than a card dealt from a larger shoe.  That is, with fewer decks, each new card seen causes a larger change in the distribution of card ranks not yet seen (i.e., still in the shoe).

This additional information manifests itself in the player’s strategy being “more composition-dependent” in games with fewer decks.  For example, consider the extreme case of a game played with 1000 decks, essentially an “infinite shoe” as far as we are concerned.  Optimal strategy in this case is “total-dependent,” so that, for example, you should always hit a hard 16 against a dealer 10, no matter how that hard 16 is made up (e.g., 10-6, 9-7, 8-4-4, etc.).  But as the number of decks decreases, the “composition” of those hard 16s begins to matter, to the point where, for example, optimal strategy may be to hit 10-6 against dealer 10, but to stand on 8-4-4.

With the recent additions to the blackjack analysis software, we can measure the extent of this “composition-dependence,” as shown in the following interesting plot.  For each number of decks, we count how many exceptions there are to total-dependent strategy, where each exception consists of a particular player hand composition and dealer up card.

Number of composition-dependent exceptions to total-dependent strategy, vs. number of decks.

This is not quite what I expected to see.  The really complex single-deck strategy, with over 300 composition-dependent special cases to remember, makes sense, as does the eventual decrease to zero composition-dependent exceptions with the absurdly large 125-deck shoe.  But the initial increase in strategy complexity, to a local maximum at 54 decks is, well, weird.  Maybe someone can provide an explanation?
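To give a sense of the space over which these exceptions are counted, here is a small, hypothetical sketch that enumerates the distinct dealable compositions of a single hard total (16, ignoring hands containing an ace) for a given shoe size.  The actual counts in the plot come from comparing the optimal composition-dependent and total-dependent actions over all compositions and up cards, which requires the full combinatorial analysis and is not reproduced here.

```python
from itertools import combinations_with_replacement

def hard_16_compositions(decks, max_cards=8):
    """Distinct ways to compose a hard 16 from ranks 2 through 10 that are
    actually dealable from a shoe of the given size (hands containing an
    ace counted as 1 are omitted for brevity)."""
    comps = set()
    for n in range(2, max_cards + 1):
        for combo in combinations_with_replacement(range(2, 11), n):
            if sum(combo) != 16:
                continue
            # Respect the shoe: at most 4*decks of each rank 2-9, 16*decks tens.
            if all(combo.count(r) <= (16 if r == 10 else 4) * decks
                   for r in set(combo)):
                comps.add(combo)
    return sorted(comps)

print(len(hard_16_compositions(decks=1)), len(hard_16_compositions(decks=8)))
```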

Anyway, things seem to get curiouser still; recall the earlier exercise where we modified the game to eliminate blackjacks, to test the claim that increased likelihood of blackjack was the main reason that fewer decks are better for the player.  To test the speculation that the cause is actually greater opportunity to take advantage of more per-card information– via more composition-dependence in strategy– suppose that instead we modify the playing strategy used, so that no matter how many decks in the shoe, we always use the same total-dependent strategy.  In this way, we do not “know” whether we are playing with fewer decks.  If my speculation is correct, then as before, the trend in expected return should flatten out.

Once again, no such luck.  The following plot compares expected value for the player using optimal composition-dependent strategy (the blue points) with the expected value using a fixed “infinite shoe” total-dependent strategy (the red points).

Expected value vs. number of decks using composition-dependent strategy (in blue) and total-dependent strategy (in red).

At this point, I am stumped.  Hopefully someone may be able to shed some light on this, by either pointing out errors in my calculations, suggesting additional factors that I am missing, or even just suggesting an appropriate way to compare the relative contribution of these two factors.