An optimal stopping card game

Let’s play the following simple game: I will shuffle a standard 52-card deck, and deal one card at a time face up onto the table between us.  You can say “Stop” at any time, at which point I will pay you an amount equal to the fraction of cards dealt so far that are red.  For example, if you stop after seeing a single card, you win 1 if it is red, 0 if it is black, with expected return 1/2.  Of course, you can wait until the entire deck is dealt, also with a guaranteed return of 1/2.

The problem is: can you do better than this?  What is the optimal strategy for playing this game, and what is the corresponding expected return?
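One way to explore the question is backward induction over the state (r, b), the number of red and black cards dealt so far: at each state, compare the payoff of stopping now with the expected payoff of drawing one more card and continuing optimally. A minimal sketch (the function name is mine):

```python
from functools import lru_cache

# State (r, b): red and black cards already dealt from a 26/26 deck.
# Stopping pays r / (r + b); drawing reveals the next card at random.

@lru_cache(maxsize=None)
def value(r, b):
    """Expected return under optimal play from state (r, b)."""
    if r + b == 52:
        return 0.5  # whole deck dealt: exactly 26 of 52 cards are red
    p_red = (26 - r) / (52 - r - b)
    draw = p_red * value(r + 1, b) + (1 - p_red) * value(r, b + 1)
    stop = r / (r + b) if r + b > 0 else 0.0  # can't stop before the first card
    return max(stop, draw)

print(value(0, 0))  # optimal expected return for the full game
```

The recursion only ever needs 27 × 27 states, so memoization makes it instant; this is presumably how the numbers quoted in the comments below were obtained.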

(This game is a variant of the following interesting game that I read about this week, which at first glance might seem even simpler, but for which optimal strategy remains an open problem: instead of dealing cards from a deck, I flip a coin repeatedly, and upon stopping I pay you an amount equal to the fraction of tosses that came up heads.)


10 Responses to An optimal stopping card game

1. All the black cards could be dealt before a red card is dealt, meaning no optimal strategy’s worst case can beat 1/2. I need to spend some time thinking about strategies.

• This is true; by “optimal strategy” I mean the strategy that maximizes expected (not worst-case) return. For example, one simple strategy is to stop as soon as you have seen one more red card than black. In the worst case that you describe, we *never* see more red than black, and deal the entire deck, yielding 1/2. But “most” of the time this won’t happen (this is the ballot problem), and we win more than 1/2. (Hint: we can still do better; that is, this strategy is not optimal.)

• mendel says:

Hmmm. If I keep drawing as long as the expected value of the game after the next draw is at least as high as my return right now, then I arrive at r <= b as a "draw" condition (r and b being the number of red and black cards drawn so far). This is equivalent to your "simple strategy", but it doesn't take into account that the expected value can't drop below 0.5.
In this game, the “gambler’s fallacy” is actually sound: with a fixed deck, the odds really must even out eventually, since by the end exactly half the cards dealt are red.

• Interesting; I may misunderstand your description of this strategy, but I don’t find that the condition for drawing that maximizes expected return is as simple as r<=b. The earliest counterexample is (r,b)=(3,2): if we stop here, we win 0.6, but drawing (and subsequently playing optimally) yields ~0.616714.
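The counterexample at (r,b)=(3,2) can be checked numerically with a short backward-induction sketch (state (r, b) = red and black cards dealt so far; the function name is mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def value(r, b):
    # Expected return under optimal play with r red, b black already dealt.
    if r + b == 52:
        return 0.5
    p_red = (26 - r) / (52 - r - b)
    draw = p_red * value(r + 1, b) + (1 - p_red) * value(r, b + 1)
    stop = r / (r + b) if r + b > 0 else 0.0
    return max(stop, draw)

# At (3, 2): stopping pays 3/5 = 0.6, but 23 red and 24 black cards
# remain among the 47 undealt, and the continuation value is higher.
cont = (23 / 47) * value(4, 2) + (24 / 47) * value(3, 3)
print(cont)  # ~0.6167 > 0.6, so drawing beats stopping here
```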

• mendel says:

Ah, I should clarify that “expected value of the game after the next draw” means I only look ahead to the next card draw. In your example, this comes to 23/47 * 4/6 for red + 24/47 * 3/6 for black ≈ 0.58. This is simple to express and to solve algebraically, but not optimal. (I agree with the comparison value of 0.617 you give for the optimal strategy.)

2. mendel says:

Numerically, I should stop drawing at one more red card than black if I have less than 2 or more than 23 black cards; try for 3 more red cards than black with 5 to 19 black cards drawn; and go for 2 more reds than black otherwise. This makes the expected return 0.792.

For comparison, the simple strategy (stop when you have more red than black) has a return of 0.787.
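The 0.787 figure for the simple strategy can be reproduced by a recursion that forces the stop-as-soon-as-r-exceeds-b rule instead of choosing the maximum (a sketch, with my own naming):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def simple(r, b):
    # Expected return of "stop as soon as r > b", from state (r, b).
    if r > b:
        return r / (r + b)  # rule fires: stop immediately
    if r + b == 52:
        return 0.5          # deck exhausted without ever leading in red
    p_red = (26 - r) / (52 - r - b)
    return p_red * simple(r + 1, b) + (1 - p_red) * simple(r, b + 1)

print(simple(0, 0))  # expected return of the simple strategy
```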

I have no idea how to analyze my optimal strategy. Based on the symmetry observed, it seems safe to say that I should stop drawing when ((1-b/26)^k+(r/26)^k)^(1/k) > 1 with k=1.26; however, I have no idea how to compute k for different decks of cards.

• mendel says:

Matrix of expected values and draw decision: http://ideone.com/8ilm6V