An optimal stopping card game

Let’s play the following simple game: I will shuffle a standard 52-card deck, and deal one card at a time face up onto the table between us.  You can say “Stop” at any time, at which point I will pay you an amount equal to the fraction of cards dealt so far that are red.  For example, if you stop after seeing a single card, you win 1 if it is red, 0 if it is black, with expected return 1/2.  Of course, you can wait until the entire deck is dealt, also with a guaranteed return of 1/2.

The problem is: can you do better than this?  What is the optimal strategy for playing this game, and what is the corresponding expected return?
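
(For readers who want to experiment before reading the comments, here is a minimal Monte-Carlo sketch, assuming Python, that estimates the expected return of any stopping rule by simulation; the function and parameter names are just illustrative.)

```python
import random

def expected_return(stop_rule, trials=100_000, seed=1):
    """Estimate the expected payout of a stopping rule by simulation.

    stop_rule(reds_seen, blacks_seen) -> True to stop (checked after each card).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        deck = ['R'] * 26 + ['B'] * 26
        rng.shuffle(deck)
        r = b = 0
        for card in deck:
            r += card == 'R'
            b += card == 'B'
            if stop_rule(r, b):
                break
        total += r / (r + b)   # payout: fraction of dealt cards that are red
    return total / trials

# The two trivial strategies from above both come out near 1/2:
print(expected_return(lambda r, b: True))    # stop after the first card
print(expected_return(lambda r, b: False))   # never stop: deal the whole deck
```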

(This game is a variant of the following interesting game that I read about this week, which at first glance might seem even simpler, but for which optimal strategy remains an open problem: instead of dealing cards from a deck, I flip a coin repeatedly, and upon stopping I pay you an amount equal to the fraction of tosses that came up heads.)


10 Responses to An optimal stopping card game

  1. All the black cards could be dealt before any red card, so no strategy’s worst-case return can beat 1/2. I need to spend some time thinking about strategies.

    • This is true: by “optimal strategy” I mean the strategy that maximizes expected (not worst-case) return. For example, one simple strategy is to stop as soon as you have seen one more red card than black. In the worst case you describe, we *never* see more red than black and deal the entire deck, yielding 1/2. But “most” of the time this won’t happen (this is the ballot problem), and we win more than 1/2. (Hint: we can still do better, i.e., this strategy is not optimal.)
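
      A quick sanity check: the expected return of this simple strategy can be computed exactly with a short recursion over (reds seen, blacks seen). A minimal sketch, assuming Python (the names are just illustrative):

      ```python
      from fractions import Fraction
      from functools import lru_cache

      RED, BLACK = 26, 26   # standard deck

      @lru_cache(maxsize=None)
      def simple(r, b):
          """Expected return of 'stop as soon as reds exceed blacks', from state (r, b)."""
          if r == b + 1:                      # stopping condition just reached
              return Fraction(r, r + b)
          left_r, left_b = RED - r, BLACK - b
          if left_r + left_b == 0:            # never got ahead: whole deck dealt
              return Fraction(1, 2)
          v = Fraction(0)
          if left_r:
              v += Fraction(left_r, left_r + left_b) * simple(r + 1, b)
          if left_b:
              v += Fraction(left_b, left_r + left_b) * simple(r, b + 1)
          return v

      print(float(simple(0, 0)))   # about 0.787, comfortably better than 1/2
      ```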

      • mendel says:

        Hmmm. If I keep drawing as long as the expected value of the game after the next draw is at least as high as my return right now, then I arrive at r <= b as a "draw" condition (r and b being the number of red and black cards drawn so far). This is equivalent to your "simple strategy", but it doesn't take into account that the expected value can't drop below 0.5.
        In this game, the "gambler's fallacy" is actually true: the red and black counts are guaranteed to even out by the end of the deck.
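
        A tiny check, assuming Python with exact rational arithmetic, that this one-step-lookahead rule really does reduce to r <= b at every state where at least one card has been drawn and cards remain:

        ```python
        from fractions import Fraction

        RED, BLACK = 26, 26

        def lookahead_says_draw(r, b):
            """Draw iff the expected fraction after one more card is at least the current one."""
            left_r, left_b = RED - r, BLACK - b
            now = Fraction(r, r + b)
            nxt = (Fraction(left_r, left_r + left_b) * Fraction(r + 1, r + b + 1)
                   + Fraction(left_b, left_r + left_b) * Fraction(r, r + b + 1))
            return nxt >= now

        assert all(lookahead_says_draw(r, b) == (r <= b)
                   for r in range(RED + 1) for b in range(BLACK + 1)
                   if 0 < r + b < RED + BLACK)
        ```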

      • Interesting: I may be misunderstanding your description of this strategy, but I find that the drawing condition that maximizes expected return is not as simple as r <= b. The earliest counterexample is (r,b) = (3,2): if we stop here, we win 0.6, but drawing (and subsequently playing optimally) yields ~0.616714.
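
        For reference, the optimal values come from the obvious dynamic program: at each state, take the better of stopping now and the expectation over the next card. A minimal sketch, assuming Python with exact rational arithmetic (the names are just illustrative):

        ```python
        from fractions import Fraction
        from functools import lru_cache

        RED, BLACK = 26, 26

        @lru_cache(maxsize=None)
        def value(r, b):
            """Expected return under optimal play after r red and b black cards dealt."""
            stop = Fraction(r, r + b) if r + b else Fraction(0)   # 'stop with nothing dealt' worth 0
            left_r, left_b = RED - r, BLACK - b
            if left_r + left_b == 0:
                return stop                                       # whole deck dealt: exactly 1/2
            draw = Fraction(0)
            if left_r:
                draw += Fraction(left_r, left_r + left_b) * value(r + 1, b)
            if left_b:
                draw += Fraction(left_b, left_r + left_b) * value(r, b + 1)
            return max(stop, draw)

        print(float(value(3, 2)))   # ~0.616714, the figure quoted above
        print(float(value(0, 0)))   # ~0.792, the overall value of the game
        ```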

      • mendel says:

        Ah, I should clarify that “expected value of the game after the next draw” means I only look ahead to the next card draw. In your example, this comes to 23/47 * 4/6 for red + 24/47 * 3/6 for black ≈ 0.58. This is simple to express and to solve algebraically, but it is not optimal. (I agree with the comparison value of 0.617 you give for the optimal strategy.)

  2. mendel says:

    Numerically, I should stop drawing at one more red card than black if I have fewer than 2 or more than 23 black cards; try for 3 more red cards than black with 5 to 19 black cards drawn; and go for 2 more red cards than black otherwise. This makes the expected return 0.792.

    For comparison, the simple strategy (stop when you have more red than black) has a return of 0.787.

    I have no idea how to analyze my optimal strategy. Based on the symmetry observed, it seems safe to say that I should stop drawing when ((1-b/26)^k + (r/26)^k)^(1/k) > 1 with k = 1.26; however, I don't know how to compute k for other deck sizes.
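
    One way to probe this numerically is to recompute the exact optimal decisions (the same recursion as in the sketch further up) and count the states where they disagree with the superellipse rule. A sketch, assuming Python and k = 1.26:

    ```python
    from fractions import Fraction
    from functools import lru_cache

    RED, BLACK = 26, 26
    K = 1.26   # empirical exponent suggested above

    @lru_cache(maxsize=None)
    def value(r, b):
        """Exact optimal expected return from state (r, b)."""
        stop = Fraction(r, r + b) if r + b else Fraction(0)
        left_r, left_b = RED - r, BLACK - b
        if left_r + left_b == 0:
            return stop
        draw = Fraction(0)
        if left_r:
            draw += Fraction(left_r, left_r + left_b) * value(r + 1, b)
        if left_b:
            draw += Fraction(left_b, left_r + left_b) * value(r, b + 1)
        return max(stop, draw)

    def optimal_stops(r, b):
        """Stopping is (weakly) optimal iff the state's value equals its stop value."""
        return r + b > 0 and value(r, b) == Fraction(r, r + b)

    def heuristic_stops(r, b):
        """The superellipse rule: stop when ((1-b/26)^k + (r/26)^k)^(1/k) > 1."""
        return ((1 - b / BLACK) ** K + (r / RED) ** K) ** (1 / K) > 1

    states = [(r, b) for r in range(RED + 1) for b in range(BLACK + 1)
              if 0 < r + b < RED + BLACK]
    disagree = [s for s in states if optimal_stops(*s) != heuristic_stops(*s)]
    print(f"{len(disagree)} of {len(states)} states disagree")
    ```

    Counting disagreements does not explain how k should scale with deck size, but it does quantify how closely the radius-1.26 rule tracks the exact stopping boundary.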

    • mendel says:

      Matrix of expected values and draw decision: http://ideone.com/8ilm6V
      k=1.26 radius: http://ideone.com/lp6Xme

    • Your code and results agree with mine. (I love the Pascal, btw!) At this point, I don’t think closer analysis of strategy for this game is important. My intent was just to suggest this more readily-solvable game, as a stepping stone to the more interesting Chow-Robbins game, where you flip a coin repeatedly instead of dealing from a deck of cards. This version is much harder, essentially because there is no “base case” for the recursion (the dice game Farkle discussed in an earlier post comes dangerously close to having this problem as well).

      • mendel says:

        Thank you.
        Well, if this is to be a stepping stone, then an analytical solution to it would certainly be beneficial: if you can formulate the strategy in terms of deck size n, then all you need to do is let n go to infinity to get the Chow-Robbins game; this both fixes the probability of red/black at 50/50 and makes it impossible to reach the “base case”.

  3. I think this is an interesting idea that might prove fruitful, but I confess that I am doubtful, for two reasons. First, the card game seems “too different” from the Chow-Robbins game; specifically, the probabilities of each drawn card being red are neither identical nor independent, as they are with the coin-flipping game. Put another way, the Chow-Robbins game is a 2D random walk, while the card game is a random lattice path from (0,0) to (n,n). It’s not clear to me that “taking the limit” removes this fundamental difference in the structure of the two games.

    On the other hand, ignoring the card game for now, we can certainly consider “finite” versions of the coin-flipping game, and see what happens when we let n go to infinity. This would allow ever-better (lower bound) approximations of the overall expected value of the game… but optimal strategy is a more difficult question. That is, consider a particular game state (e.g., one still unresolved in the paper linked above: 116 heads, 104 tails), and consider the optimal continue/stop decision for that state when played with a maximum number of flips n. As we increase n, the optimal decision for that game state may change, possibly infinitely often, in which case it’s not clear to me how to “take the limit” to determine the optimal decision in the infinite version. (E.g., it is not known whether the expected value of the game at all states is rational.)
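
    For the lower-bound part, the game truncated at a maximum of N flips is solvable by plain backward induction, and its value is a nondecreasing lower bound on the Chow-Robbins value. A minimal sketch, assuming Python (N here plays the role of n above):

    ```python
    def truncated_value(N):
        """Value of the coin game if we are forced to stop after at most N flips."""
        # At the horizon the payout is simply the current fraction of heads.
        v = [h / N for h in range(N + 1)]            # v[h] for states with h + t = N
        for n in range(N - 1, 0, -1):                # n = number of flips so far
            nxt = [0.0] * (n + 1)
            for h in range(n + 1):
                cont = 0.5 * v[h + 1] + 0.5 * v[h]   # flip once more
                nxt[h] = max(h / n, cont)            # stop now, or keep flipping
            v = nxt
        return 0.5 * v[1] + 0.5 * v[0]               # we must flip at least once

    # Truncation only restricts the available strategies, so each of these values
    # is a lower bound on the unbounded game, and the bounds improve as N grows.
    for N in (100, 1000, 5000):
        print(N, truncated_value(N))
    ```

    The same backward induction also yields, for each N, the truncated game's continue/stop decision at any fixed state (such as 116 heads, 104 tails), which is exactly the sequence of decisions whose limiting behavior is the sticking point.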
