This is a follow-up to last week’s post, which ended with the question, “Why are fewer decks better for the player in blackjack?” mostly still unresolved. After another week to think about the problem and explore some ideas, I think we can now answer this question with some confidence.
To make this self-contained at the expense of some repetition from last week, let’s begin at the beginning. Assuming a fixed set of rules common in many casinos (the particular choice of rules matters little for our purpose here), the following plot shows the player’s optimal expected return for a single round of blackjack, in percent of initial wager, as a function of the number of decks in the shoe.
Note that the number of decks is indicated on a logarithmic scale, to emphasize the asymptotic behavior of expected return as the number of decks grows large. The question is, why does the player’s expected return increase in games with fewer decks?
The “standard” answer is that fewer decks make blackjacks more likely, as suggested by the relevant Wikipedia page:
“All things being equal, using fewer decks decreases the house edge. This mainly reflects an increased likelihood of player blackjack, since if the player draws a ten on their first card, the subsequent probability of drawing an ace is higher with fewer decks. It also reflects a decreased likelihood of blackjack-blackjack push in a game with fewer decks.”
This is a real (and easily calculated) effect, but only a minor, secondary one. As we did last week, we can show this by considering a modified game, where the bonuses and penalties associated with blackjacks are removed, and showing that the trend in expected value vs. number of decks still remains. The following plot shows this trend in red, with the blue points corresponding to the normal game as in Figure 1.
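To see just how easily calculated this effect is: for a shoe of d decks, a two-card blackjack requires an ace and a ten-value card in either order, so its probability is 2 · (4d/52d) · (16d/(52d−1)), which decreases toward the infinite-shoe limit 2 · (4/52) · (16/52) as d grows. A quick sketch (the function name is mine):

```python
from fractions import Fraction

def p_blackjack(decks):
    """Probability of a two-card blackjack (ace plus ten-value card,
    in either order) dealt from a fresh shoe of the given number of decks."""
    cards = 52 * decks
    aces, tens = 4 * decks, 16 * decks
    # Ace-then-ten or ten-then-ace:
    return 2 * Fraction(aces, cards) * Fraction(tens, cards - 1)

for d in (1, 2, 4, 8):
    print(f"{d} deck(s): {float(p_blackjack(d)):.6f}")
# Infinite-shoe limit:
print(f"infinite:  {2 * (4 / 52) * (16 / 52):.6f}")
```

The single-deck probability is about 4.83%, versus about 4.73% in the limit, a difference of only a tenth of a percentage point, consistent with this being a minor effect.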
As you can see, the absolute expected return in a “blackjack-less” game is miserable, but that’s not the point. The point is that the trend remains; even with no blackjacks, fewer decks are still better for the player. To see this more clearly (here and in what follows), instead of plotting absolute expected values, let’s normalize the data relative to an “infinite shoe,” and plot the gain in expected value for a particular number of decks, compared with the expected value for an infinite number of decks (assuming the same rules and playing strategy).
At this point last week, I speculated that the main reason that fewer decks are better is the greater opportunity to vary “composition-dependent” playing strategy, where decisions to stand, hit, etc., may depend not only on the player’s hand total, but on how that total is composed (e.g., is a hard 16 made up of 10-6, or 10-3-3, or 8-4-4, etc.?). With a sufficiently large number of decks, strategy is effectively just “total-dependent,” since the composition of the hand has a negligible effect on the probability distribution of card ranks remaining in the shoe. (This sufficiently large number of decks turns out to be 125.) But with fewer decks, each individual card dealt from the shoe provides more information to the player, the result being that the player’s strategy is “more composition-dependent” in games with fewer decks.
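To make “more information per card” concrete, here is a small illustration (the helper function is mine, not part of the original analysis): the probability that the next card busts a hard 16 depends noticeably on whether that 16 is composed of 10-6 or 10-3-3 in a single-deck game, while in a very large shoe the composition barely matters.

```python
from collections import Counter

def p_bust_hard16(hand, decks):
    """P(next card busts a hard 16), given that the cards already in hand
    have been removed from a fresh shoe of the given number of decks.
    Card values: 1 (ace) through 9, and 10 for all ten-value cards."""
    shoe = Counter({v: 4 * decks for v in range(1, 10)})
    shoe[10] = 16 * decks
    for card in hand:
        shoe[card] -= 1
    remaining = sum(shoe.values())
    # A hard 16 busts on any card worth 6 through 10 (an ace counts as 1).
    busting = sum(shoe[v] for v in range(6, 11))
    return busting / remaining

print(p_bust_hard16((10, 6), 1))      # single deck, 16 as 10-6
print(p_bust_hard16((10, 3, 3), 1))   # single deck, 16 as 10-3-3
print(p_bust_hard16((10, 6), 1000))   # huge shoe: approaches 32/52
```

With one deck, 10-6 busts with probability 0.600 while 10-3-3 busts with probability about 0.633 (the small cards that would have been safe draws are already gone); with a very large shoe, both are essentially the composition-independent 32/52 ≈ 0.615.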
We can test this theory in a similar fashion, by preventing the playing strategy from varying with number of decks, and seeing if the trend disappears. To do this, we fix the player’s strategy to be the same (total-dependent) optimal strategy for the game with an infinite number of decks… no matter how many decks we are actually playing with. That is, we no longer “know” whether we are playing with fewer decks. The following plot shows the resulting behavior of gain in expected return vs. number of decks.
This is where we left off last week. We removed the effect of blackjack bonuses and penalties, then also removed the changing composition-dependence of playing strategy… and fewer decks are still better for the player. What else might be the cause of the trend?
As with the blackjack bonus, perhaps it is the player’s “extra” options to double down and/or split pairs, which can yield larger returns (up to eight times the initial wager) even in a single round. That is, even if we fix our playing strategy, maybe the advantage of being able to double down or split is greater with fewer decks?
Unfortunately, this also turns out not to be the case. As before, this theory is easily tested, by simply removing the options to double down or split pairs, and seeing if the trend in expected return disappears. In fact, let’s go a step further, by considering the simplest, most conservative possible “don’t bust” playing strategy: hit only those hands that we are guaranteed to improve (e.g., hard hands totaling less than 12), and stand on everything else. Never double down, never split a pair. The following plot shows the gain in expected return vs. number of decks using this strategy (in red).
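As a sanity check on how this stripped-down game plays out, here is a quick Monte Carlo sketch of a single round under the “don’t bust” strategy. This is my own rough simulation, not the exact analysis behind the plots: it assumes one common rule set (dealer stands on all 17s), removes the blackjack bonuses and penalties as before, and has the player stand on all soft hands, since hitting those is not guaranteed to improve the total.

```python
import random

def hand_value(cards):
    """Return (total, soft) with an ace counted as 11 when that helps."""
    total = sum(cards)
    if 1 in cards and total + 10 <= 21:
        return total + 10, True
    return total, False

def play_round(decks, rng):
    """One round: no blackjack bonuses, no doubling or splitting,
    player hits only hard totals below 12 ('don't bust')."""
    shoe = [v for v in range(1, 10) for _ in range(4 * decks)]
    shoe += [10] * (16 * decks)
    rng.shuffle(shoe)
    player = [shoe.pop(), shoe.pop()]
    dealer = [shoe.pop(), shoe.pop()]
    # Player: hit only hands guaranteed to improve (hard totals below 12).
    while True:
        total, soft = hand_value(player)
        if soft or total >= 12:
            break
        player.append(shoe.pop())
    # Dealer: draw to 17, standing on all 17s (a rule assumption).
    while hand_value(dealer)[0] < 17:
        dealer.append(shoe.pop())
    p, d = hand_value(player)[0], hand_value(dealer)[0]
    if d > 21 or p > d:
        return 1
    return 0 if p == d else -1

rng = random.Random(17)
n = 100_000
ev = sum(play_round(1, rng) for _ in range(n)) / n
print(f"single deck, don't-bust strategy: EV ~ {ev:+.3f}")
```

The simulated expected return is several percent negative, which is consistent with the “miserable” absolute returns noted earlier; the point of the plots, of course, is the small relative trend across deck counts, which needs exact computation rather than simulation to resolve.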
Although this further reduces the magnitude of the advantage by a modest amount, the trend persists. What else is there that varies with number of decks?
The answer is: the probabilities of outcomes of the dealer’s hand. As mentioned at the outset, the probability of blackjack (which is the same for the player and the dealer) is greater with fewer decks. But more importantly, the probability that the dealer busts is also greater with fewer decks. In fact, the key observation is that the game played with the “don’t bust” strategy evaluated above very closely approximates the even simpler game where the player simply bets on whether the dealer busts or not.
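The dealer’s overall bust probability from a fresh shoe can be computed exactly by recursing over every possible draw sequence, weighting each card by its remaining count in the depleted shoe. A sketch (again assuming the dealer stands on all 17s; the function name is mine):

```python
def dealer_bust_prob(decks):
    """P(dealer busts), drawing both cards and all hits from a fresh shoe
    of the given number of decks; dealer stands on all 17s (rule assumption)."""
    shoe = {v: 4 * decks for v in range(1, 10)}
    shoe[10] = 16 * decks

    def draw(total, soft, remaining):
        if total > 21:
            # Demote a soft ace from 11 to 1 if possible; otherwise bust.
            return draw(total - 10, False, remaining) if soft else 1.0
        if total >= 17:
            return 0.0  # dealer stands
        p = 0.0
        for v in shoe:
            n = shoe[v]
            if n == 0:
                continue
            shoe[v] -= 1  # deplete, recurse, then restore
            t, s = total + v, soft
            if v == 1 and t + 10 <= 21:
                t, s = t + 10, True  # count this ace as 11
            p += (n / remaining) * draw(t, s, remaining - 1)
            shoe[v] += 1
        return p

    return draw(0, False, 52 * decks)

print(f"1 deck:  {dealer_bust_prob(1):.4f}")
print(f"8 decks: {dealer_bust_prob(8):.4f}")
```

Both values come out near 28%, with the single-deck probability the larger of the two, matching the observation that dealer busts are more likely with fewer decks.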
So, the claim is that the simplest explanation for why fewer decks are better for the player is that dealer busts are more likely. The challenge, however, is how to test this theory. The approach with the previous claims was to remove, or fix, the particular aspect of the game that varied with number of decks. In this case, we can’t very well remove the dealer… but with a bit of work, we can “fix” him.
To do this, consider the following (still “blackjack-less”) game: the dealer receives only his up card, without a hole card. The player makes strategy decisions to stand, hit, double down, or split, in an optimal composition-dependent manner, as usual. When it comes time to resolve the dealer’s hand, instead of drawing cards from the shoe, he selects one of ten biased dice, one for each possible up card, and rolls the appropriate die. The outcome indicates whether the dealer busts, or stands with a total of 17 through 21.
In this way, the player has all of the strategy options as in the normal game, with the sole exception of the bonuses and penalties of blackjacks (since we have already quantified that effect). The only difference is that the probabilities of outcomes of the dealer’s hand are fixed, depending only on the up card, and not on the number of decks in the shoe. (The natural choice for these fixed probabilities is the set of probabilities for an infinite shoe.) If our theory is correct, then when we evaluate the player’s expected return in this modified game, the trend in expected return should disappear.
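One way to construct the ten biased “dice” is to compute, for each up card, the infinite-shoe distribution over the dealer’s final outcome. A sketch of this construction (my own, with the same stand-on-all-17s rule assumption, and no hole card as described above):

```python
import random
from functools import lru_cache

# Infinite-shoe card probabilities: ranks 1 (ace) through 9, plus ten-value.
P_CARD = {v: 1 / 13 for v in range(1, 10)}
P_CARD[10] = 4 / 13

@lru_cache(maxsize=None)
def outcome_dist(total, soft):
    """Distribution over dealer outcomes ('bust' or a standing total 17-21),
    drawing from an infinite shoe; dealer stands on all 17s (rule assumption)."""
    if total > 21:
        if soft:
            return outcome_dist(total - 10, False)  # demote an ace to 1
        return {'bust': 1.0}
    if total >= 17:
        return {total: 1.0}
    dist = {}
    for v, p in P_CARD.items():
        t, s = total + v, soft
        if v == 1 and t + 10 <= 21:
            t, s = t + 10, True  # count this ace as 11
        for k, q in outcome_dist(t, s).items():
            dist[k] = dist.get(k, 0.0) + p * q
    return dist

# One biased "die" per up card (the ace up card starts as a soft 11).
dice = {up: outcome_dist(11 if up == 1 else up, up == 1) for up in range(1, 11)}

def roll(up_card, rng=random):
    """Resolve the dealer's hand by rolling the appropriate biased die."""
    outcomes, weights = zip(*dice[up_card].items())
    return rng.choices(outcomes, weights)[0]
```

Each die’s probabilities sum to one, and the familiar shape of the dealer outcome table emerges: a 6 up card busts far more often than a ten. Crucially, these probabilities never change with the number of decks, which is exactly the “fix” the experiment requires.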
Cool! The lack of caveats or special assumptions here is worth repeating: after imposing ever greater restrictions on rules and strategy, we have “given back” everything in the normal game except for the bonuses and penalties of blackjacks. That is, the player is free to make optimal, possibly composition-dependent strategy decisions, including doubling down and splitting pairs, while assuming knowledge of the newly fixed probabilities of outcomes of the dealer’s hand.