Efficiency of card counting in blackjack (Part 2)

Introduction

Continuing from last time, recall that card counting systems may be used in two different ways: to vary betting strategy, betting more or less on each round based on an estimate of the current expected return; but possibly also to vary playing strategy, affecting decisions to stand, hit, etc., based on estimates of favorability of the various options.

My objective in this still-introductory post is to focus on the first of these two roles, describing how typical card counting systems work, and how they are used to estimate expected return.  Next time, I will finally get to the new stuff, dealing with playing strategy “indices,” and the concept of “efficiency” as a measure of how close various card counting systems are to being optimal.

The True Count

Given a playing strategy (which we are currently assuming is the fixed, total-dependent basic strategy from the last post), we can compute the exact expected return for a round played with that strategy, as a function of the shoe composition prior to the deal.  A shoe composition is specified by a vector s indicating the number of cards of each rank ace through ten.  For example, a full n-deck shoe is given by

\mathbf{s} = n \mathbf{d} , where \mathbf{d} = (4,4,4,4,4,4,4,4,4,16)
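
For concreteness, here is a minimal Python sketch of this representation; the vector d and the function full_shoe are names chosen purely for illustration, and are reused in the sketches below.

    import numpy as np

    # Shoe compositions as vectors indexed by rank: ace, 2, ..., 9, ten
    # (ten-valued cards, i.e., tens and face cards, grouped together).
    d = np.array([4, 4, 4, 4, 4, 4, 4, 4, 4, 16])   # a single deck

    def full_shoe(n_decks):
        """Composition s = n * d of a full n-deck shoe."""
        return n_decks * d

    assert full_shoe(6).sum() == 6 * 52   # 312 cards in a six-deck shoe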

The actual expected return is a complicated non-linear function of s.  A typical card counting system estimates this expected return using the true count (TC), a simpler function-- linear in the card probabilities-- that a player can compute in his head: basically, the true count is a weighted sum of the probabilities of the card ranks remaining in the shoe.

To make this precise, let t be a vector of “tags,” or weights, associated with each card rank ace through ten.  Different systems use different tags; for example, the very common Hi-Lo system, first described by Harvey Dubner 50 years ago, uses the tags

\mathbf{t} = (-1,1,1,1,1,1,0,0,0,-1)

Then the true count for a given shoe composition is defined to be

TC_\mathbf{t}(\mathbf{s}) = \frac{-\mathbf{t} \cdot \mathbf{s}}{n(\mathbf{s})}

where, if we temporarily define n(s) to be the total number of cards remaining in the shoe, we see that the true count is just a sum of the card probabilities, weighted by –t.
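
As a minimal sketch of this per-card form of the true count, using the vectors defined above (again, the function name is purely illustrative):

    import numpy as np

    hi_lo = np.array([-1, 1, 1, 1, 1, 1, 0, 0, 0, -1])   # tags t for ace, 2..9, ten
    d = np.array([4, 4, 4, 4, 4, 4, 4, 4, 4, 16])        # a single deck

    def true_count_per_card(tags, shoe):
        """TC_t(s) = (-t . s) / n(s), with n(s) the number of cards remaining."""
        return -np.dot(tags, shoe) / shoe.sum()

    # For any full shoe, the Hi-Lo true count is zero, since t . d == 0.
    print(true_count_per_card(hi_lo, 6 * d))   # 0.0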

We need to revise this definition slightly, though, to reflect how the true count is actually computed at the table.  First, the numerator in the formula is the running count (RC); the reader can verify that we can mentally maintain the running count throughout a shoe by adding the tag t_i for each card of rank i that we see dealt, starting with an initial running count (IRC) of -\mathbf{t}\cdot(n\mathbf{d}) for a full n-deck shoe.  (Note that the IRC for the Hi-Lo system above is conveniently equal to zero for any number of decks; more on this later.)
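
A sketch of this bookkeeping, again with illustrative names and an arbitrary example sequence of dealt cards:

    import numpy as np

    d = np.array([4, 4, 4, 4, 4, 4, 4, 4, 4, 16])
    hi_lo = np.array([-1, 1, 1, 1, 1, 1, 0, 0, 0, -1])

    def initial_running_count(tags, n_decks):
        """IRC = -t . (n * d); the invariant is that RC == -t . s thereafter."""
        return -int(np.dot(tags, n_decks * d))

    rc = initial_running_count(hi_lo, 6)   # 0 for Hi-Lo, a "balanced" count
    for rank_index in (4, 9, 0):           # a five, a ten, and an ace are dealt
        rc += hi_lo[rank_index]
    print(rc)                              # 0 + 1 - 1 - 1 = -1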

Whenever we need to compute the true count, we simply divide the running count by n(s).  The problem is that it is difficult to keep track of exactly how many cards remain in the shoe.  The solution in most card counting systems is to instead divide by the number of decks remaining (i.e., blocks of 52 cards), estimated with some coarser resolution.  For example, if we estimate the number of decks by rounding to the nearest half-deck, then the true count divisor is

n(\mathbf{s}) = \frac{1}{2} \left \lfloor \frac{\sum s_i}{26} + \frac{1}{2} \right \rfloor

I will use this definition of the true count divisor for the rest of this discussion.  At this point, I think it is important to note two effects of this change.  First, we have reduced the amount of mental effort required, by approximating the number of cards remaining in the shoe.  But a second effect, perhaps more important but rarely made explicit, is that we have effectively introduced a scale factor, multiplying all of our previously computed true counts by 52.  This is also helpful for the human player, since the resulting true counts range over a wider interval and may be approximated by integers, instead of being confined to small fractional values typically in the interval (-1, 1).
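
A sketch of this "table" version of the true count, with example numbers chosen purely for illustration:

    import numpy as np

    d = np.array([4, 4, 4, 4, 4, 4, 4, 4, 4, 16])
    hi_lo = np.array([-1, 1, 1, 1, 1, 1, 0, 0, 0, -1])

    def decks_remaining(shoe):
        """Cards remaining, converted to decks, rounded to the nearest half-deck."""
        return 0.5 * np.floor(shoe.sum() / 26 + 0.5)

    def true_count_table(tags, shoe):
        """Running count (-t . s) divided by the half-deck-rounded divisor."""
        return -np.dot(tags, shoe) / decks_remaining(shoe)

    # Example: a 6-deck shoe after 24 low cards (2 through 6) have been dealt,
    # so that 288 cards remain and the running count is +24.
    shoe = 6 * d - np.array([0, 5, 5, 5, 5, 4, 0, 0, 0, 0])
    print(decks_remaining(shoe))          # 5.5 decks
    print(true_count_table(hi_lo, shoe))  # about +4.36, vs. 24/288 = 0.083 per card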

Accuracy of true count estimation of expected return

So how well does this work?  The following figure shows the actual expected return (still using fixed total-dependent basic strategy) vs. the Hi-Lo true count, for each of 4.3 million rounds (gray points) played over 100,000 shoes.  Here again, the color is an overlaid smoothed histogram showing the greater density of points near the origin.

Expected return vs. Hi-Lo true count, using fixed total-dependent basic strategy.

The correlation coefficient is 0.953; for comparison, if the true count were a perfect (linear) predictor of expected return, these points would all lie exactly on a straight line, with a correlation of 1.0.
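
Measuring this is straightforward given simulation output; a minimal sketch, assuming arrays holding the pre-deal true count and the exact expected return for each simulated round:

    import numpy as np

    def tc_return_correlation(true_counts, expected_returns):
        """Pearson correlation between pre-deal true count and expected return."""
        return np.corrcoef(true_counts, expected_returns)[0, 1]

    # e.g., r = tc_return_correlation(true_counts, expected_returns)
    # where both arrays have one entry per simulated round (placeholders here).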

Can we do better than this?  It turns out that we can… but only if we relax some constraints on the counting systems we are allowed to use.

First, recall that the initial running count (IRC) for the Hi-Lo system is zero.  Such systems are called “balanced”; systems with a non-zero IRC are called “unbalanced.”  It is not clear to me how useful this distinction is, or in particular why unbalanced counts are often advertised as “not requiring a true count conversion,” as if the definition of the true count above depends in any way on whether the IRC happens to be zero or not.

At any rate, if we expand our space of possible counting systems to include unbalanced counts, then the Knockout, or K-O system, which counts sevens as +1, has a slightly higher correlation of 0.955.

However, these two systems– one balanced, one unbalanced– are only optimal among “Level 1” systems, i.e., systems with tags in {-1, 0, +1}.  If we consider “Level 2” systems with tags chosen from {-2, -1, 0, +1, +2}, then the balanced count with the highest correlation with basic strategy expected return is (-2, 2, 2, 2, 2, 2, 1, 0, -1, -2), with a correlation of 0.967.  The best overall Level 2 count is the unbalanced (-2, 1, 2, 2, 2, 2, 1, 0, -1, -2), with a correlation of 0.973.

Betting isn’t everything

There is a reason why you have probably never heard of or seen these two Level 2 counts in the wild.  Note that the optimality of these counting systems depends on the specific rule variations, penetration, and fixed playing strategy assumed so far.  Also, these systems are only “optimal” in the sense of maximizing correlation between true count and actual expected return.  This is still an imperfect measure of “betting efficiency,” which we have yet to define precisely.

But before diving deeper into betting efficiency, we are now in a position to address playing efficiency… where these systems generally suffer.  So far, the playing strategy has been fixed, total-dependent “basic” strategy, depending only on the cards in the player’s current hand (and the dealer’s up card).  For example, basic strategy is to always hit hard 16 against a dealer 10.  But we can improve performance by allowing playing strategy to vary based not only on the cards in the current hand, but also on the current true count.

Next time, I will describe how this is done, including new software for evaluating the resulting improvement in expected return, compared with the best possible improvement from perfect play.
