This is a follow-up to the previous post, which presented a situation in the second book of the Hunger Games trilogy as a probability problem. The problem may be re-stated as follows: when balls are randomly distributed into initially empty urns, what is the probability that every urn gets at least one ball? (In the book, living past Hunger Games victors are the balls, and the districts/genders of the victors are the urns. The problem asks how likely it is that a male and a female from each of the 12 districts are represented.)
I found that students really enjoyed tackling this problem, and it can lead in many interesting directions. It makes for a very accessible programming exercise to estimate the probability by simulation. But there is also a nice analytic solution, particularly when each urn is equally likely. In that case, we can use inclusion-exclusion to count the distributions of the $n$ balls that leave a given set of $k$ of the $24$ urns empty, yielding the probability

$$p(n) = \sum_{k=0}^{24} (-1)^k \binom{24}{k} \left(1 - \frac{k}{24}\right)^n.$$
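The inclusion-exclusion sum can be evaluated exactly with rational arithmetic; here is a minimal Python sketch (the function name is my own):

```python
from fractions import Fraction
from math import comb

def p_all_urns_occupied(n, m=24):
    """Probability that n balls dropped uniformly at random into m urns
    leave no urn empty, by inclusion-exclusion over the empty urns."""
    return float(sum(
        (-1) ** k * comb(m, k) * Fraction(m - k, m) ** n
        for k in range(m + 1)
    ))

print(p_all_urns_occupied(59))  # about 0.099
```

Using `Fraction` keeps the alternating sum exact until the final conversion, avoiding the cancellation error that the large binomial coefficients would otherwise cause in floating point.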
The following plot shows this probability $p(n)$ as a function of $n$, the number of balls (i.e., past victors). For $n = 59$, the probability is only about 0.099, or less than 1 in 10.
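The simulation exercise mentioned above might look like the following sketch (the trial count and seed are arbitrary choices of mine):

```python
import random

def estimate_p_all_occupied(n, m=24, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that n balls thrown
    uniformly at random into m urns leave no urn empty."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Throw n balls; record which urns were hit at least once.
        occupied = {rng.randrange(m) for _ in range(n)}
        if len(occupied) == m:
            hits += 1
    return hits / trials

print(estimate_p_all_occupied(59))  # should land near 0.099
```

With 100,000 trials the standard error is roughly 0.001, so the estimate should agree with the exact value to two decimal places.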
Finally, this problem is essentially a slight variant of the coupon collector problem, which instead asks for the expected number of balls needed so that every urn gets at least one ball. There is plenty of interesting literature on the problem, including the case where the probabilities for each urn are not equal.
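For equally likely urns, the classical coupon collector answer is $m H_m$ balls in expectation, where $H_m$ is the $m$-th harmonic number; a quick check with $m = 24$ urns:

```python
def expected_balls_to_fill(m):
    """Expected number of balls until every one of m equally likely
    urns has at least one ball: m times the m-th harmonic number."""
    return m * sum(1 / i for i in range(1, m + 1))

print(expected_balls_to_fill(24))  # about 90.6
```

So on average it takes about 90 victors before every district/gender combination is represented, which is consistent with the coverage probability being well under 1/2 at $n = 59$.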
Which brings us back to the original episode in the book, where those probabilities are indeed not equal; 3 of the 12 districts are described as favorites for producing victors. This makes for another nice simulation exercise, where it is necessary to quantify what “favorite” might mean. Another interesting challenge is to show that this always makes the scenario even less likely. That is, the probability of “hitting” all of the urns is maximized when the probabilities for the urns are equal.
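One way to make that exercise concrete: suppose, purely as an illustrative assumption (the book gives no numbers), that each of the 6 urns belonging to the 3 favored districts is twice as likely to receive a ball as the others. A simulation along these lines suggests the coverage probability does drop below the equal-probability case:

```python
import random

def estimate_coverage(n, weights, trials=50_000, seed=1):
    """Monte Carlo estimate of the probability that n balls, dropped into
    urns with the given selection weights, leave no urn empty."""
    rng = random.Random(seed)
    m = len(weights)
    hits = 0
    for _ in range(trials):
        occupied = set(rng.choices(range(m), weights=weights, k=n))
        if len(occupied) == m:
            hits += 1
    return hits / trials

equal = estimate_coverage(59, [1] * 24)
favored = estimate_coverage(59, [2] * 6 + [1] * 18)  # hypothetical 2-to-1 weighting
print(equal, favored)  # the unequal case should come out lower
```

The 2-to-1 weighting is hypothetical, but any departure from uniform should show the same qualitative effect, consistent with the claim that the coverage probability is maximized by equal urn probabilities.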