This is a follow-up to last week’s post, which ended with the question, “Why are fewer decks better for the player in blackjack?” mostly still unresolved. After another week to think about the problem and explore some ideas, I think we can now answer this question with some confidence.

To make this self-contained at the expense of some repetition from last week, let’s begin at the beginning. Assuming a fixed set of rules common in many casinos– the particular choice of rules matters little for our purpose here– the following plot shows the player’s optimal expected return for a single round of blackjack, in percent of initial wager, as a function of the number of decks in the shoe.

Note that the number of decks is indicated on a logarithmic scale, to emphasize the asymptotic behavior of expected return as the number of decks grows large. The question is: why does the player’s expected return increase in games with fewer decks?

The “standard” answer is that fewer decks make blackjacks more likely, as suggested by the relevant Wikipedia page:

“All things being equal, using fewer decks decreases the house edge. This mainly reflects an increased likelihood of player blackjack, since if the player draws a ten on their first card, the subsequent probability of drawing an ace is higher with fewer decks. It also reflects a decreased likelihood of blackjack-blackjack push in a game with fewer decks.”
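This effect is easy to compute directly. A quick sketch, assuming the usual composition of 16 ten-valued cards and 4 aces per deck:

```python
from fractions import Fraction

def p_blackjack(num_decks):
    """Probability of a two-card natural: one ace plus one ten-valued card,
    in either order, dealt from a fresh shoe of the given size."""
    cards = 52 * num_decks
    tens, aces = 16 * num_decks, 4 * num_decks
    return 2 * Fraction(tens, cards) * Fraction(aces, cards - 1)

for d in (1, 2, 8):
    print(d, float(p_blackjack(d)))
print("infinite:", 2 * (16 / 52) * (4 / 52))
```

A single deck yields about 4.83%, versus about 4.73% for the infinite shoe: real, but a small difference.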

This is a real (and easily calculated) effect, but only a minor, secondary one. As we did last week, we can show this by considering a modified game in which the bonuses and penalties associated with blackjacks are removed, and showing that the trend in expected value vs. number of decks remains. The following plot shows this trend in red, with the blue points corresponding to the normal game in Figure 1.

As you can see, the *absolute* expected return in a “blackjack-less” game is miserable, but that’s not the point. The point is that the trend remains; even with no blackjacks, fewer decks are still better for the player. To see this more clearly (here and in what follows), instead of plotting *absolute* expected values, let’s normalize the data relative to an “infinite shoe,” and plot the *gain* in expected value for a particular number of decks, compared with the expected value for an infinite number of decks (assuming the same rules and playing strategy).

At this point last week, I speculated that the main reason that fewer decks are better is the greater opportunity to vary “composition-dependent” playing strategy, where decisions to stand, hit, etc., may depend not only on the player’s hand total, but on how that total is composed (e.g., is a hard 16 made up of 10-6, or 10-3-3, or 8-4-4, etc.?). With a sufficiently large number of decks, strategy is effectively just “total-dependent,” since the composition of the hand has a negligible effect on the probability distribution of card ranks remaining in the shoe. But with fewer decks, each individual card dealt from the shoe provides more information to the player, the result being that the player’s strategy is “more composition-dependent” in games with fewer decks.
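To see the effect of composition on a concrete decision, consider the probability of busting when hitting a hard 16. A minimal sketch, under the simplifying assumption that only the player’s own cards have been removed from the shoe (ignoring the dealer’s up card):

```python
def p_bust_16(hand, num_decks):
    """Probability that one more card busts a hard 16, given the hand's
    composition. Any card of value 6 through 10 busts (16 + 6 = 22)."""
    counts = {r: 4 * num_decks for r in range(1, 10)}
    counts[10] = 16 * num_decks  # tens, jacks, queens, kings pooled
    for c in hand:
        counts[c] -= 1
    remaining = sum(counts.values())
    return sum(counts[r] for r in range(6, 11)) / remaining

print(p_bust_16((10, 6), 1))     # single deck, 10-6
print(p_bust_16((10, 3, 3), 1))  # single deck, 10-3-3
print(p_bust_16((10, 6), 100))   # large shoe, near the infinite-shoe 8/13
```

In single deck, 10-6 busts with probability 0.600 while 10-3-3 busts with probability about 0.633; with a large shoe, both are essentially the infinite-shoe value 8/13 ≈ 0.615, so composition hardly matters.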

We can test this theory in a similar fashion, by preventing the playing strategy from varying with number of decks, and seeing if the trend disappears. To do this, we fix the player’s strategy to be the same (total-dependent) optimal strategy for the game with an infinite number of decks… no matter how many decks we are *actually* playing with. That is, we no longer “know” whether we are playing with fewer decks. The following plot shows the resulting behavior of gain in expected return vs. number of decks.

This is where we left off last week. We removed the effect of blackjack bonuses and penalties, then *also* removed the changing composition-dependence of playing strategy… and fewer decks are *still* better for the player. What else might be the cause of the trend?

Similar to the bonus of blackjack, perhaps it is the player’s “extra” options to double down and/or split pairs, which can yield larger returns– up to eight times the initial wager– even in a single round. That is, even if we fix our playing strategy, maybe the advantage of being able to double down or split is greater with fewer decks?

Unfortunately, this also turns out not to be the case. As before, this theory is easily tested, by simply removing the options to double down or split pairs, and seeing if the trend in expected return disappears. In fact, let’s go a step further, by considering the simplest, most conservative possible “don’t bust” playing strategy: hit only those hands that we are *guaranteed to improve* (e.g., hard hands totaling less than 12), and stand on everything else. Never double down, never split a pair. The following plot shows the gain in expected return vs. number of decks using this strategy (in red).
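This “don’t bust” rule is simple enough to state in a couple of lines. A sketch, under one reading of the rule above, in which soft hands are also stood on (hitting them can never bust, but is not guaranteed to improve the total):

```python
def dont_bust_action(total: int, soft: bool) -> str:
    """Maximally conservative strategy: hit only hands that one more card
    is guaranteed to improve (hard totals below 12); stand on everything
    else. Never double down, never split."""
    return "hit" if (not soft and total < 12) else "stand"
```

For example, `dont_bust_action(11, False)` hits, while `dont_bust_action(12, False)` stands, since a hard 12 can bust.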

Although this further reduces the magnitude of the advantage by a modest amount, the trend persists. What else is there that varies with number of decks?

The answer is: the probabilities of outcomes of the dealer’s hand. As mentioned at the outset, the probability of blackjack (which is the same for the player and the dealer) is greater with fewer decks. But more importantly, **the probability that the dealer busts is also greater with fewer decks**. In fact, the key observation is that the game played with the “don’t bust” strategy evaluated above very closely approximates the even simpler game where the player simply bets on *whether the dealer busts or not*.
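This claim about the dealer’s bust probability can be checked exactly, with a small recursion over dealer hands that tracks the shoe composition. A sketch, assuming the dealer stands on all 17s (the rule choice is an assumption here):

```python
def dealer_bust_prob(num_decks):
    """Exact probability that the dealer busts, drawing every card from a
    fresh shoe of the given size and standing on all totals of 17+."""
    # counts[r] = cards of rank r remaining; rank 10 pools all ten-values
    counts = [0] + [4 * num_decks] * 9 + [16 * num_decks]
    bust = 0.0

    def rec(total, soft, remaining, p):
        nonlocal bust
        if total > 21:
            if soft:
                rec(total - 10, False, remaining, p)  # count an ace as 1
            else:
                bust += p
            return
        if total >= 17:
            return  # dealer stands
        for rank in range(1, 11):
            c = counts[rank]
            if c == 0:
                continue
            counts[rank] -= 1
            q = p * c / remaining
            if rank == 1 and total + 11 <= 21:
                rec(total + 11, True, remaining - 1, q)  # ace as 11
            else:
                rec(total + rank, soft, remaining - 1, q)
            counts[rank] += 1

    rec(0, False, 52 * num_decks, 1.0)
    return bust

print(dealer_bust_prob(1), dealer_bust_prob(8))
```

Both values are near 28%, with the single-deck bust probability slightly higher, consistent with the trend described above.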

So, the claim is that the simplest explanation for why fewer decks are better for the player is that dealer busts are more likely. The challenge, however, is how to test this theory. The approach with the previous claims was to remove– or fix– the particular aspect of the game that varied with number of decks. In this case, we can’t very well *remove* the dealer… but with a bit of work, we *can* “fix” him.

To do this, consider the following (still “blackjack-less”) game: the dealer receives only his up card, without a hole card. The player makes strategy decisions to stand, hit, double down, or split, in optimal composition-dependent manner, as usual. When it comes time to resolve the dealer’s hand, instead of drawing cards from the shoe, he instead selects from one of ten different biased dice, one for each possible up card, and rolls the appropriate die. The outcome indicates whether the dealer busts, or stands with total 17 through 21.
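The ten “dice” can be constructed directly from the infinite-shoe dealer probabilities. A sketch, again assuming the dealer stands on all 17s, using exact rational arithmetic to avoid rounding:

```python
from fractions import Fraction
from functools import lru_cache

# Infinite-shoe rank probabilities (ace = 1; tens through kings pooled as 10).
P_RANK = {r: Fraction(1, 13) for r in range(1, 10)}
P_RANK[10] = Fraction(4, 13)

# Outcome order: stand on 17, 18, 19, 20, 21, or bust.
@lru_cache(maxsize=None)
def outcome_dist(total, soft):
    """Distribution over final dealer outcomes from the given (soft) total,
    drawing from an infinite shoe and standing on all 17s."""
    if total > 21:
        if soft:
            return outcome_dist(total - 10, False)  # count an ace as 1
        return (Fraction(0),) * 5 + (Fraction(1),)  # bust
    if total >= 17:
        d = [Fraction(0)] * 6
        d[total - 17] = Fraction(1)
        return tuple(d)
    d = [Fraction(0)] * 6
    for rank, p in P_RANK.items():
        if rank == 1 and total + 11 <= 21:
            sub = outcome_dist(total + 11, True)  # ace as 11
        else:
            sub = outcome_dist(total + rank, soft)
        d = [x + p * y for x, y in zip(d, sub)]
    return tuple(d)

# One "die" per up card (ace through ten); no hole card in this variant.
dice = {up: outcome_dist(11 if up == 1 else up, up == 1) for up in range(1, 11)}
```

Each die is a probability vector over the six outcomes; as expected, the dealer busts far more often showing a 6 than showing a 10.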

In this way, the player has all of the strategy options as in the normal game, with the sole exception of the bonuses and penalties of blackjacks (since we have already quantified that effect). The only difference is that the probabilities of outcomes of the dealer’s hand are fixed, depending only on the up card, and not on the number of decks in the shoe. (The natural choice for these fixed probabilities are the probabilities for an infinite shoe.) If our theory is correct, then if we evaluate the player’s expected return in this modified game, we should see the trend in expected return for the player disappear.

Cool! The lack of caveats or special assumptions here is worth repeating; after imposing ever greater restrictions on rules and strategy, we have “given back” everything in the normal game except for the bonuses and penalties of blackjacks. That is, the player is free to make optimal, possibly composition-dependent strategy decisions, including doubling down and splitting pairs, and assuming knowledge of the newly-fixed probabilities of outcomes of the dealer’s hand.

It seems to me that each of the above factors accounts for some of the effect. The biggest difference to me is the accuracy expected from composition-dependent strategy. It is fairly useless in shoe games because of dilution at the start and irrelevance deep in the shoe. In single deck (SD) it is of great use off the top, but becomes less relevant as the count gets the chance to move. Of course, to a basic strategy (BS) player there is no difference in the information on which to base a decision as you get more penetration.

The final observation I don’t believe you made is that the match-ups in the final graph (in red) show a slightly higher player advantage with *more* decks– a reversal of the normal trend. This indicates that the match-ups are much more favorable to the player with fewer decks in the unrestricted game, but slightly worse when the probabilities of what only the dealer can draw are fixed. The last step would be to look at the effect of fewer decks on *player* busts. After all, the player jumps off the cliff first, but can vary his play to avoid it. When the player does vary his play– as BS does differently with fewer decks than in shoe games– does that difference in strategy contribute to the gain?

Applying a restriction similar to your last one to the player only, and then to both the player and the dealer, would round out your investigation nicely. I am not sure that altering the player’s game with knowledge of the final restrictions should be allowed (the last sentence of your summary); it introduces a new and irrelevant factor into the data. Without the player being similarly restricted, your last simulation doesn’t tell the whole balanced story.

You make a good point that all of the factors contribute to the effect. What I found interesting was the relative size of those contributions. Consider the difference between SD and infinite-deck gain (about 0.7%) as a rough baseline: then from the analysis in the post, we see that player blackjacks account for about 15% of this difference, composition-dependent strategy (which for years I thought was the most important factor) only about 7%, doubling and splitting about 21%… and dealer busts the remaining 57%.

You are also correct that the “fairest” setup would be to also fix the player’s strategy, rather than letting him optimize knowing that the dealer’s probabilities are fixed. I did this as well– with a few variations, including “normal” infinite-deck strategy, composition-dependent strategy varying with each number of decks, and the absurd “only hit < 12” strategy described in the post– and the results were essentially the same. I showed the “optimal player” version in the post, since it makes a stronger statement than any of the others; i.e., despite the additional freedom of optimality afforded the player, the gain from fewer decks disappears.

Hi, I know this post is old, but I am curious about a few things. Can what I have read so far be summarized as such?

When there are fewer decks, each time a card is dealt from the deck, the change in the odds of receiving each subsequent card is greater in magnitude. Since the player goes first and the dealer doesn’t play when the player busts, whenever the dealer does play, he is often left with a deck whose odds are stacked against him.

If this makes sense, can we say that “the player ‘rigs’ the deck more as he plays when there are less decks”?

To test this idea, I would propose a “two dealer” experiment, where the player acts exactly the same as the dealer, and the rules are tuned to remove all bonuses, keeping everything else the same each round while adjusting only the number of decks.

Is it possible to run the “two dealer” experiment in your software? If you do test it, what else do you think this experiment can tell us about the game?

Just my thoughts. (I haven’t read the other posts so I apologize if it is the same as someone else’s idea)

“If this makes sense, can we say that ‘the player rigs the deck more as he plays when there are less decks’?”

If I understand you correctly– that you’re asking whether the *player’s* draws from the deck affect the probabilities of the *dealer’s* subsequent outcome in a fundamentally different way using, say, basic strategy as opposed to something like “mimic the dealer”– then I think we can say definitively that the answer is no, they don’t. That is, the probabilities of the various outcomes of the dealer’s hand don’t depend *at all* on the player’s choice of hit/stand strategy; this is essentially a corollary of the so-called “extended” true count theorem.

Note that Figure 5 shows the result of “almost” the experiment you are suggesting– except instead of mimicking the dealer, we are considering the more conservative “never bust” strategy. We can actually evaluate the “mimic the dealer” strategy, discounting any bonuses, more directly: both player and dealer have *exactly* the same distribution of possible outcomes, and since the only “asymmetry” in the game is that the dealer wins when *both* hands bust, the house advantage is equal to the probability that both hands bust… which decreases with more decks in much the same way that the probability that “just” the dealer busts does.
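This is easy to check numerically: with an infinite shoe, the two mimicking hands are independent, so the probability that both bust is just the square of the single-hand bust probability (roughly 0.28² ≈ 0.08). A Monte Carlo sketch, assuming an infinite shoe and stand-on-all-17s:

```python
import random

def draw(rng):
    """One card from an infinite shoe: ranks ace..9 each 1/13, tens 4/13."""
    r = rng.randrange(13)
    return 10 if r >= 9 else r + 1

def mimic_total(rng):
    """Final hand total playing dealer rules: hit below 17, stand on 17+."""
    total, aces = 0, 0
    while total < 17:
        c = draw(rng)
        if c == 1:
            aces += 1
            total += 11  # count the ace as 11 for now
        else:
            total += c
        while total > 21 and aces:
            total -= 10  # demote an ace from 11 to 1
            aces -= 1
    return total

rng = random.Random(7)
trials = 200_000
both_bust = sum(mimic_total(rng) > 21 and mimic_total(rng) > 21
                for _ in range(trials))
p_both = both_bust / trials
print(p_both)  # roughly 0.08
```

So, discounting bonuses, the house edge against a “mimic the dealer” player is on the order of 8%, far worse than basic strategy.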