That’s an interesting question that I don’t have any good answers to. The difficulty is that combinatorial analysis of the type described here effectively assumes a probability distribution over possible *arrangements* of the *entire* remaining face-down shoe, namely that that distribution is uniform. Accounting for shuffle tracking would require specifying not just the (higher) probability of the *next* card being a particular value, but the non-uniformity of the entire distribution over possible arrangements of all remaining face-down cards. Even the “expressive power” needed to specify such a distribution is an interesting and challenging problem.
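To make the uniform-arrangement assumption concrete, here is a small Python sketch (the card counts are hypothetical): if every arrangement of the remaining shoe is equally likely, then the marginal probability that the *next* card has a given value reduces to simply count/total.

```python
from math import factorial

# Hypothetical remaining shoe: counts of cards by value (ace=1, ..., ten=10).
counts = {1: 2, 4: 5, 9: 3, 10: 5}
total = sum(counts.values())

# Number of distinct arrangements of the remaining face-down cards
# (a multinomial coefficient).
arrangements = factorial(total)
for c in counts.values():
    arrangements //= factorial(c)

# Under the uniform assumption over all arrangements, the probability
# that the next card has value v is just counts[v] / total.
p_next_ten = counts[10] / total
print(arrangements, p_next_ten)
```

A shuffle-tracking model would have to replace that single uniform distribution over all `arrangements` with something non-uniform, which is exactly the representational problem described above.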

]]>First off, thank you for posting this information! I have followed along with your blackjack posts (as best I can) and played around with the blackjack application; very fascinating insights and work!

My question may be beyond your work (or my understanding!), but I’m hoping it is something fairly easy for you to answer, or that you can point me in the right direction. When using the optimal composition-dependent strategy, the strategy varies based on the cards that have been played, and the assumption is that each of the remaining cards in the deck is equally likely to be the next card dealt.

If you were able to shuffle track a deck with an estimated probability of success, how would that affect the EV, and could that be integrated into your optimal composition-dependent strategy or algorithm? For example, you have 10-4, and the deck has five 10-value cards, five 4s, two aces, and three 9s left. The probability of a 10 would then be 5/15 (about 33%), but with your shuffle tracking skills you estimate a 75% probability of the next card being a 10-value card.
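One way to see how such a tracked estimate would feed into the analysis is to compare a single decision under both distributions. The sketch below (my own illustration, not the post author’s algorithm) computes the probability of busting on one hit holding hard 14, first under the uniform next-card assumption and then under a tracked estimate; the assumed 75% figure for a 10 and the proportional split of the remaining mass are hypothetical modeling choices.

```python
# Hypothetical remaining cards by value (ace=1), matching the example above.
counts = {1: 2, 4: 5, 9: 3, 10: 5}
total = sum(counts.values())

# Uniform next-card distribution: P(v) = count / total.
uniform = {v: c / total for v, c in counts.items()}

# Tracked estimate: assume P(next = 10) = 0.75, and spread the remaining
# 0.25 over the other values in proportion to their counts.
rest = total - counts[10]
tracked = {v: (0.75 if v == 10 else 0.25 * c / rest)
           for v, c in counts.items()}

def p_bust(dist, hand=14):
    # An ace hit counts as 1 here, so only values exceeding 21 - hand bust.
    return sum(p for v, p in dist.items() if hand + v > 21)

print(round(p_bust(uniform), 3), round(p_bust(tracked), 3))
```

The jump in bust probability (from 8/15 to 0.825 in this toy example) shows why the strategy table entries would change; the hard part, as noted in the reply, is specifying a principled non-uniform distribution rather than this ad hoc proportional one.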

]]>@Jules, sure, no problem! Citation is preferred and appreciated, but the data and images are unlicensed, so you’re free to use them as you like. I’m happy to hear about another educational use of this experiment!

]]>Although 8.6 minutes seems a bit long, after revisiting this I think 7.8 minutes is also a bit short :). That is, the large variance of customer waiting time means that we need to simulate a lot of customers to accurately estimate the mean, and my initial implementation in MATLAB was slow enough that I only sampled a few “months” worth of customers.

So I rewrote the simulation in Python and cranked up the simulation time (about 2.4 million customers), with the result that a more accurate estimate is around 8.1-8.2 minutes. See the code here if you’d like to compare; lines 14-15 of test_queue.py pick the shortest queue, breaking ties by lowest index.
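For reference, here is a minimal sketch of a join-the-shortest-queue simulation with that same tie-breaking rule. The arrival and service rates, queue count, and function name below are hypothetical placeholders, not the parameters or code of the original test_queue.py.

```python
import random

def simulate(n_customers, n_queues=5, arrival_rate=1.0,
             service_rate=0.25, seed=0):
    """Estimate the mean wait (time before service starts) when each
    arriving customer joins the shortest queue, breaking ties by
    lowest queue index. Poisson arrivals, exponential service times."""
    rng = random.Random(seed)
    # Future departure times of customers still in each queue (FIFO,
    # so each list stays sorted).
    departures = [[] for _ in range(n_queues)]
    t = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)
        # Remove customers who have already left each queue.
        for d in departures:
            while d and d[0] <= t:
                d.pop(0)
        # Shortest queue; min() naturally breaks ties by lowest index.
        i = min(range(n_queues), key=lambda j: len(departures[j]))
        start = max(t, departures[i][-1]) if departures[i] else t
        departures[i].append(start + rng.expovariate(service_rate))
        total_wait += start - t
    return total_wait / n_customers
```

With these placeholder rates the utilization is 1.0 / (5 × 0.25) = 0.8, and the high variance of exponential waits is exactly why a run this short gives a noisy mean; millions of customers, as in the post, are needed for a stable estimate.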

Maybe that makes a difference for keeping elements in memory or cache: you never have to reload elements from the finished zone.

]]>The Go, CPython, and OpenJDK standard libraries all implement it the backwards way:

https://golang.org/src/math/rand/rand.go?s=11509:11549#L235

https://github.com/python/cpython/blob/3.9/Lib/random.py#L348

https://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/Collections.java#l517

I can’t think of any reason not to do it forwards, so maybe that’s how I’ll do it from now on.
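For concreteness, here is a sketch of the “forwards” variant in Python: the finished zone grows from the left, and each swap partner is drawn from the unfinished suffix. (CPython’s `random.shuffle`, linked above, instead iterates from the last index down, swapping with a random element of the prefix; both directions yield a uniform shuffle.)

```python
import random

def shuffle_forward(a, rng=random):
    """Fisher-Yates shuffle iterating forwards: at step i, swap a[i]
    with a uniformly random element of the not-yet-finished zone a[i:]."""
    n = len(a)
    for i in range(n - 1):
        j = rng.randrange(i, n)  # i <= j < n
        a[i], a[j] = a[j], a[i]
    return a
```

At every step `a[:i]` is final and never touched again, which is the locality point raised above about not reloading elements in the finished zone.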
