Tracking heap memory use in Windows

Introduction

I recently encountered a problem with a C++ program that was allocating more memory than it should. It wasn’t leaking memory– that is, the program was well-behaved in the sense of eventually releasing every byte of memory that it allocated. But it allocated a lot, enough so that running multiple instances of the program simultaneously on cluster nodes would exceed physical memory capacity.

It should be relatively easy to troubleshoot memory issues like this. However, let’s suppose that our application runs on Windows, compiled in Visual Studio, and for mostly stupid reasons, the built-in Diagnostic Tools are unavailable, and we have neither administrative privilege nor support to install any of the other various existing tools for doing this sort of thing. How can we reinvent this wheel with a minimal amount of effort?

This turned out to be an interesting exercise, with a relatively simple implementation that solved the problem at hand, but also raised some questions that I don’t have good answers for.

Implementation

The result is available here, as well as on GitHub. It’s pretty easy to use: there is no need to modify application source code at all; just compile and link with mem_log.cpp in a debug build. There is only one function exposed in the corresponding header: mem::print_log(std::ostream&), which you can call yourself if desired, but it will be called automatically at program termination, printing a log of heap memory usage to stdout. For example, the following program:

int main()
{
    for (int i = 10; i > 0; --i)
    {
        int *a = new int[i];
        delete[] a;
    }
}

yields the following output:

                    Heap            Freed      Max. Alloc.
==========================================================
Caller: c:\users\username\test_mem\test_mem.cpp(5):main
Blocks:                0               10                1
Bytes:                 0              220               40

The first column indicates the current (and in the usual case, final) state of the heap, as the number of allocated blocks (i.e., calls to operator new) and corresponding total number of bytes. Non-zero values here suggest a memory leak… although more on this shortly.

The second column indicates the number of previously allocated blocks and bytes released by eventual calls to operator delete. Finally, the third column is the one that helped solve this problem, indicating the maximum number of bytes (and corresponding blocks) allocated at any one time.

There is one tricky note: in general there will be multiple records in the log, one for each line of application source code that calls operator new. Such a call may not be explicit as in the above example, but may instead be an indirect result of, say, std::vector::push_back. We want to identify this call with the application’s code, not the standard library’s. We do this by walking the stack trace, starting from the globally replaced implementation of operator new, inspecting each corresponding source code filename until the first one is found containing the case-sensitive prefix specified by the preprocessor definition MEM_LOG_PATH (whose default is c:\users). If your application source code lives somewhere else, define MEM_LOG_PATH accordingly.

Limitations and questions

Although this code helped to solve this particular problem, it is still of limited use:

  1. It only works on Windows. I have indicated in source code comments where the Windows-specific stuff is, but have made no effort to implement the corresponding Linux calls to backtrace, etc.
  2. It is not thread-safe.
  3. It is not a leak detector. I only replaced operator new and delete, which is all that we really need to track memory usage in a well-behaved program that doesn’t care about alignment. We would also need to replace the array forms new[] and delete[] (which otherwise call our non-array implementations) to properly check for common undefined-behavior errors like freeing an array allocated via new[] with the non-array delete.
  4. Along the same lines as (3), I didn’t bother with the arguably much harder problem of tracking application memory usage via std::malloc and std::free.

Finally, the special, automagic global replacement behavior of implementing our own operator new and delete presents an interesting puzzle that I’m not sure how to solve, or whether it is even solvable. First, consider that we want to track every memory allocation and release, no matter how early (or late) it occurs during program execution. Since that tracking involves manipulation of our own internal data structure, we need to ensure that our internal object gets fully constructed– and stays constructed– by the time we need it to record any such unpredictably timed allocation.

One easy way to do this is with the construct on first use idiom: we instantiate our internal object as a function static variable, allocated on the heap the first time we need it, and never released. This intentionally “leaks” our object, but (1) so what? And (2) we have to do this, since there is no guarantee– as far as I can tell– that any particular non-trivial static object’s lifetime would persist beyond the last call to the allocation operators we are trying to track.

This works, in the sense that we can confidently track all heap allocations and releases, no matter when they occur… but now consider the final automatic call to mem::print_log(std::cout), which is realized in the destructor of a “dummy” static object whose only purpose is to try to be the last thing that happens in the program. But because of the static initialization order fiasco, there is still a possibility that we might “miss” a subsequent heap allocation or release during some later static object’s destruction. (That is, although we will track it, we won’t see it in the final printed log.) Is there any way to fix this? I think the answer is no, but I’m not sure.


Follow-up: I found two identical packs of Skittles, among 468 packs with a total of 27,740 Skittles

Introduction

This is a follow-up to a post from earlier this year discussing the likelihood of encountering two identical packs of Skittles, that is, two packs having exactly the same number of candies of each flavor. Under some reasonable assumptions, it was estimated that we should expect to have to inspect “only about 400-500 packs” on average until encountering a first duplicate.

So, on 12 January of this year, I started buying boxes of packs of Skittles. This past week, “only” 82 days, 13 boxes, 468 packs, and 27,740 individual Skittles later, I found the following identical 2.17-ounce packs:

Test procedure

I purchased all of the 2.17-ounce packs of Skittles for this experiment from Amazon in boxes of 36 packs each. From 12 January through 4 April, I worked my way through 13 boxes, for a total of 468 packs, at the approximate rate of six packs per day. This was enough to feel like I was making progress each day, but not enough to become annoying or risk clerical errors. For each six-pack recording session, I did the following:

  1. Take a pack from the box, open it, and empty and sort the contents onto a blank sheet of paper.
  2. Take a photo of the contents of the pack.
  3. Record, with pen and paper, the number of Skittles of each color in the pack (more on this later).
  4. Empty the Skittles into a bowl.
  5. Repeat steps 1-4; after six packs, save and review the photos, recording the color counts to file, verifying against the paper record from step 3, and checking for duplication of a previously recorded pack.

The photos captured all of the contents of each pack, including any small flakes and chips of flavored coating that were easy to disregard… but also larger “chunks” of misshapen paste that were often only partially coated, or not coated at all, and that required some criteria up front to determine whether or how to count them. For this experiment, my threshold for counting a chunk was answering “Yes” to all three of (a) is it greater than half the size of a “normal” Skittle, (b) is it completely coated with a single clearly identifiable flavor color, and (c) is it not gross, that is, would I be willing to eat it? Any “No” answer resulted in recording that pack as containing “uncounted” material, such as the pack shown below.

Example of a Skittles pack recorded with 15 green candies and an “uncounted” chunk.

The entire data set is available here as well as on GitHub. The following figure shows the photos of all 468 packs (the originals are 1024×768 pixels each), with the found pair of identical packs circled in red.

All 468 packs of Skittles, arranged top to bottom, in columns left to right. Each pair of columns corresponds to a box of 36 packs. The two identical packs are circled in red.

But… why?

So, what’s the point? Why bother with nearly three months of effort to collect this data? One easy answer is that I simply found it interesting. But I think a better answer is that this seemed like a great opportunity to demonstrate the predictive power of mathematics. A few months ago, we did some calculations on a cocktail napkin, so to speak, predicting that we should be able to find a pair of identical packs of Skittles with a reasonably– and perhaps surprisingly– small amount of effort. Actually seeing that effort through to the finish line can be a vivid demonstration for students of this predictive power of what might otherwise be viewed as “merely abstract” and not concretely useful mathematics.

(As an aside, I think the fact that this particular concrete application happens to be recreational, or even downright frivolous, is beside the point. For one thing, recreational mathematics is fun. But perhaps more importantly, there are useful, non-recreational, “real-world” applications of the same underlying mathematics. Cryptography is one such example application; this experiment is really just a birthday attack in slightly more complicated form.)

Assumptions and predictions

For completeness, let’s review the approach discussed in the previous post for estimating the number of packs we need to inspect to find a duplicate. We assume that the color of each individual Skittle is independently and uniformly distributed among the d=5 possible flavors (strawberry, orange, lemon, green apple, and grape). We further assume that the total number n of Skittles in a pack is independently distributed with density f(n), where we guessed at f(n) based on similar past studies.

We use generating functions to compute the probability q that two particular randomly selected packs of Skittles would be identical, where

q = \sum_n f(n)^2 p(n,5)

p(n,d) = \frac{1}{d^{2n}}[\frac{x^{2n}}{(n!)^2}](\sum_{k=0}^n (\frac{x^k}{k!})^2)^d

Given this, a reasonable approximation of the expected number of packs we need to inspect until encountering a first duplicate is \sqrt{\pi / (2q)}, or about 400-500 packs depending on our assumption for the pack size density f(n).
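As a quick numerical sanity check of that approximation: plugging in representative values of q from the 1/100,000 to 1/200,000 range quoted in the previous post (the exact value depends on the assumed f(n)) reproduces the few-hundred-pack ballpark.

from math import pi, sqrt

# Rough check of the square-root approximation above, for assumed values of q
# spanning the range quoted in the previous post.
for q in (1 / 100000, 1 / 150000, 1 / 200000):
    print(f"q = {q:.2e}: expect about {sqrt(pi / (2 * q)):.0f} packs")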

Observations

The most common and controversial question asked about Skittles seems to be whether all five flavors are indeed uniformly distributed, or whether some flavors are more common than others. The following figure shows the distribution observed in this sample of 468 packs.

Average number of Skittles of each flavor in a pack. The assumed uniform average of 11.8547 Skittles of each color is shown by the black line.

Somewhat unfortunately, this data set potentially adds fuel to the frequent accusation that the yellow Skittles dominate. However, I leave it to an interested reader to consider and analyze whether this departure from uniformity is significant.

How accurate was our prior assumed distribution f(n) for the total number of Skittles in each pack? The following figure shows the observed distribution from this sample of 468 packs, with the mean of 59.2735 Skittles per pack shown in red.

Histogram of total number of Skittles in each pack. The mean of 59.2735 is shown in red.

Although our prior assumed average of 60 Skittles per pack was reasonable, there is strong evidence against our assumption of independence from one pack to the next, as shown in the following figure. The x-axis indicates the pack number from 1 to 468, and the y-axis indicates the number of Skittles in the pack, either total (in black) or of each individual color. The vertical grid lines show the grouping of 36 packs per box.

Number of Skittles per pack (total and of each color) vs. pack number.

The colored curves at bottom really just indicate the frequency and extent of outliers for the individual flavors; for example, we can see that every color appeared on at least 2 and at most 24 Skittles in every pack. The most interesting aspect of this figure, though, is the consecutive spikes in total number of Skittles shown by the black curve, with the minimum of 45 Skittles in pack #291 immediately followed by the maximum of 73 Skittles in pack #292. (See this past analysis of a single box of 36 packs that shows similar behavior.) This suggests that the dispenser that fills each pack targets an amortized rate of weight (or perhaps volume), got jammed somehow, resulting in an underfilled pack, and in getting “unjammed” overfilled the subsequent pack.

This is admittedly just speculation; note, for example, that the 36 packs in each box are relatively free to shift around, and I made only a modest effort to pull packs from each box in a consistent “top to bottom, front to back” order as I recorded them. So although each group of 36 packs in this data set definitely comes from the same box, the order of packs within each group of 36 does not necessarily correspond to the order in which the packs were filled at the factory.

At any rate, if the objective of this experiment were to obtain a representative “truly random” sample of packs of Skittles, then the above behavior suggests that buying these 36-pack boxes in bulk is probably not recommended.

Stopping rule

Finally, one additional caveat: fortunately the primary objective of this experiment was not to obtain a “truly random” sample, but only to confirm the predicted “ease” with which we could find a pair of identical packs of Skittles. However, suppose that we did want to use this data set as a “truly random” sample… and further suppose that we could eliminate the practical imperfections suggested above, so that each pack was indeed a theoretically perfect, independent random sample.

Then even in this clean room thought experiment, we still have a problem: by stopping our sampling procedure upon encountering a duplicate, we have biased the distribution of possible resulting sample data sets! This can perhaps be most clearly seen with a simpler setup that allows an analytical solution: suppose that each pack contains just n=2 Skittles, and each individual Skittle is independently equally likely to be one of just d=2 possible colors, red or green. If we collect any fixed number of sample packs, then we should expect to observe an “all-red” pack with two red Skittles exactly 1/4 of the time. But if we instead collect sample packs until we observe a first duplicate, and then count the fraction that are all red, the expected value of this fraction is slightly less than 1/4 (181/768, to be exact). That is, by stopping with a duplicate, we are less likely to even get a chance to observe the more rare all-red (or all-green) packs.
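The 181/768 value is easy to verify by brute force, since a sample in this simplified setup can never contain more than four packs. The following sketch enumerates every possible stopping sequence exactly (counting the final duplicate pack as part of the sample):

from fractions import Fraction

# The three possible pack compositions for n=2 candies of d=2 colors,
# with their probabilities under the uniform model.
packs = {'RR': Fraction(1, 4), 'GG': Fraction(1, 4), 'RG': Fraction(1, 2)}

def expected_all_red_fraction(seq=(), prob=Fraction(1)):
    """Expected fraction of all-red packs in a sample collected until the
    first duplicate composition (the duplicate pack is included in the count)."""
    total = Fraction(0)
    for pack, p in packs.items():
        sample = seq + (pack,)
        if pack in seq:  # first duplicate: stop and score this sample
            total += prob * p * Fraction(sample.count('RR'), len(sample))
        else:            # no duplicate yet: keep collecting
            total += expected_all_red_fraction(sample, prob * p)
    return total

print(expected_all_red_fraction())  # 181/768, slightly less than 1/4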

It’s an interesting problem to quantify the extent of this effect (which I suspect is vanishingly small) with actual packs of Skittles, where the numbers of candies are larger, and the probabilities of those “extreme” compositions such as all reds are so small as to be effectively zero.


Printing a single-elimination tournament bracket

How should the teams be arranged in a “seeded” single-elimination tournament bracket?

This question is motivated by the NCAA men’s basketball tournament, the first two rounds of which are now in the books. There are 64 teams in the tournament, divided into four regions each with 16 teams seeded #1 through #16. In typical displays of these regional brackets, such as those on the NCAA and ESPN web sites, the 16 seeds in a region are arranged as shown at left in the figure below, where in this example the favored seed always wins, advancing to the right with each round of the tournament:

Typical layout of seeded 16-team single-elimination tournament bracket.

The basic idea behind this arrangement is that all of the best-seeded teams should be able to advance together for as many rounds as possible, with more advantage granted to higher seeds. More precisely, we want a tournament bracket with r rounds to be fair, defined recursively as follows: the match-ups between the n=2^r teams must be arranged with seed j playing seed n+1-j in the current round, so that assuming the favored seed wins each game, seeds 1 \ldots 2^{r-1} can all advance to the next round, and the remaining bracket is (recursively) also fair.

The above bracket layout has a few visually appealing properties:

  1. In the first round, the favored seed is always the “visiting” team, placed above the lower-seeded opponent.
  2. The #1 seed is always at the top of the bracket in each round.
  3. The #2 seed is (almost, except for property 1) as far from the #1 seed as possible, so that the two top-seeded teams clearly advance “toward” each other in each round.

But this isn’t the only possible fair bracket layout. It’s an interesting problem to count them all. (Hint: even constrained by all three properties above, there are still four different possible layouts, of which the above is just one example.)

For example, consider the following layout, which motivated this post:

Another fair bracket layout generated by several simple algorithms.

This layout is the one that I used to store the history of all past NCAA tournaments since 1985 in the current 64-team format. The layout has the first two properties above, but not the third. I chose it because it’s easy to implement; the following simple Python function generates the ordering of seeds, effectively starting from the single #1 seed as the champion on the right of the bracket and moving left through previous rounds, maintaining the fairness condition as we go:

def bracket_seeds(num_teams):
    seeds = [1]
    while len(seeds) < num_teams:
        games = zip(seeds, (2 * len(seeds) + 1 - seed for seed in seeds))
        seeds = [team for game in games for team in game]
    return seeds

This layout has another “nice” structural interpretation as well: if we temporarily re-index the seeds to start with zero instead of one, then the positions of each seed in the bracket are given by the bit-reversed Gray codes of the seeds.
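Here is a quick check of that claim for a 16-team bracket; bracket_seeds is the function defined above, repeated so the snippet stands alone:

def bracket_seeds(num_teams):
    seeds = [1]
    while len(seeds) < num_teams:
        games = zip(seeds, (2 * len(seeds) + 1 - seed for seed in seeds))
        seeds = [team for game in games for team in game]
    return seeds

def gray(k):
    # Binary reflected Gray code of k.
    return k ^ (k >> 1)

def bit_reverse(k, bits):
    # Reverse the low-order `bits` bits of k.
    return int(format(k, '0{}b'.format(bits))[::-1], 2)

num_teams, rounds = 16, 4
layout = [0] * num_teams
for seed in range(num_teams):                    # 0-indexed seeds
    layout[bit_reverse(gray(seed), rounds)] = seed + 1
print(layout == bracket_seeds(num_teams))        # True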

So, back to the original puzzle: what similarly “nice” algorithm yields the more common layout used by the NCAA, ESPN, etc.? More generally, what other “nice” algorithms yield yet other alternative fair layouts? For example, PrintYourBrackets.com seems to use the same approach that I did, generating the same layout for brackets with 4, 8, and 16 teams… but the 32-team bracket is different.


The answer is always “None of the above (or below)”

Consider this multiple-choice problem:

Which of the following answers is the correct answer to this question?

  1. None of the below
  2. All of the below
  3. All of the above
  4. One of the above
  5. None of the above
  6. None of the above

This is a variation on a common logic problem (see here, for example). The list of possible answers is slightly different here, but the solution is essentially the same: it’s “None of the above”… but I’ll leave it as an exercise to work out which “None of the above” is correct.

As with the similar “whodunit” logic puzzle discussed recently, I think it’s a nice programming exercise to not only solve this particular instance of the puzzle, but to enumerate all possible logic puzzles of this type. That is, using answers of the form “(All | None | One) of the (above | below),” construct all multiple-choice problems that have a unique correct answer.
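For the particular instance above, a brute-force search is only a few lines. The sketch below interprets each answer as a statement about the truth values of all six answers, looks for self-consistent assignments, and prints the true (i.e., correct) answers in each one; this reading of “correct” is an assumption, but the natural one, and enumerating all puzzles of this type amounts to wrapping the same check in a loop over possible answer lists.

from itertools import product

# Answers 1-6, each encoded as a predicate on the tuple t of six truth values.
statements = [
    lambda t: not any(t[1:]),      # 1. None of the below
    lambda t: all(t[2:]),          # 2. All of the below
    lambda t: all(t[:2]),          # 3. All of the above
    lambda t: sum(t[:3]) == 1,     # 4. One of the above
    lambda t: not any(t[:4]),      # 5. None of the above
    lambda t: not any(t[:5]),      # 6. None of the above
]

# An assignment is consistent if each answer is true exactly when its statement holds.
for t in product([True, False], repeat=6):
    if all(s(t) == v for s, v in zip(statements, t)):
        print([i + 1 for i, v in enumerate(t) if v])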

It turns out there are many different possible puzzles… but they are all one of just two basic forms: the correct answer must be either

  • “None of the (above or below),” or else
  • A vacuously true “(All or None) of the above” appearing first in the list, or “(All or None) of the below” appearing last in the list.

There is one interesting wrinkle to consider: is it necessary to assume that the problem actually has a unique correct answer? Although this does not affect the two possible forms of the solution described above, requiring this assumption does expand slightly the set of valid puzzles.


Identical packs of Skittles

Introduction

“No two rainbows are the same. Neither are two packs of Skittles. Enjoy an odd mix.” – Skittles label

Analyzing packs of Skittles (or sometimes M&Ms) seems to be a very common exercise in introductory statistics. How many candies are in each pack? How many candies of each individual color? Are the colors uniformly distributed?

The motivation for this post is to ask some questions raised by the claim in the above quote:

  1. How many different possible packs of Skittles are there? Here we consider two packs of Skittles to be distinguishable only by the number of candies of each color.
  2. What is the probability that two randomly purchased packs of Skittles are the same?
  3. What is the expected number of packs that must be purchased until first encountering a duplicate?

Number of possible packs

If there are n candies in a 2.17-ounce pack, and each candy is one of d=5 colors, then the number of possible distinguishable packs is

{n+d-1 \choose d-1}

Interestingly, a Skittles commercial from the 1990s suggests that there are 371,292 different possible packs (about 27 seconds into the video). It’s not clear whether this number is based on any mathematics or just marketing… but it’s actually reasonably close to the value 367,290 that we get with the above formula assuming that there are n=52 candies in a pack.

However, the total number of candies varies from pack to pack. Most studies suggest an average of about 60 candies per pack– with 635,376 possible packs of exactly that size– and this study includes a couple of outlier packs with as few as 42 and as many as 85 candies. We can sum the binomial coefficients over this range of possible pack sizes, to get 42,578,514 possible distinguishable packs… but fortunately, we can more conservatively allow for all pack sizes from empty up to, say, 100 candies, and still remain within an order of magnitude, and confidently assert that there are at most 100 million different possible packs of Skittles.
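These counts are easy to reproduce directly from the formula above; for example:

from math import comb

def num_packs(n, d=5):
    # Number of distinguishable packs containing exactly n candies of d colors.
    return comb(n + d - 1, d - 1)

print(num_packs(52))                             # 367,290
print(num_packs(60))                             # 635,376
print(sum(num_packs(n) for n in range(42, 86)))  # 42,578,514
print(sum(num_packs(n) for n in range(101)))     # 96,560,646, i.e., under 100 million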

Total number of possible packs of Skittles, from “empty” to given maximum number of candies per pack.

Mars Wrigley Confectionery advertises 200 million individual candies produced every day; if we make the extremely conservative assumption that these are distributed among equal numbers of packs of various sizes (i.e., for every 2.17-ounce pack, there is, for example, a corresponding 3.4-pound party pack), so that the 2.17-ounce packs make up only about 1.6% of total production, this works out to approximately 50,000 packs per day. By the pigeonhole principle, after only about 1800 days of production, there must be two identical packs of Skittles out there somewhere… but identical packs are likely much more common than this worst-case analysis suggests, as we’ll see shortly.

Probability of identical packs

The second question requires a bit more work. Let’s temporarily assume that there are always exactly n=60 candies in every pack, so that there are {64 \choose 4}, or 635,376 possible packs. We might naively assume that the probability that two randomly purchased packs are identical is 1/635376. But this is only true if every possible pack is equally likely, e.g., a pack of 60 all-red candies is just as likely as a pack with 12 candies of each color. I can’t envision a method of production and packaging that would yield this uniform distribution.

Instead, let’s assume that each individual candy’s color is independently and identically distributed, with each of the d=5 colors equally likely. For example, imagine an equally large number of candies of each color mixed together in one giant urn (I’m a mathematician, so I have to call it an urn), and dispensed roughly 60 at a time into each pack.

This tracks very well with the actual observed distribution of colors of candies within and across packs. Some packs have more yellows, some have more reds, etc., but there is almost never a color missing, and almost never exactly the same number of every color… but with more and more observed packs, the overall distribution of colors is indeed very nearly uniform. In other words, this variability is exactly what we should expect from the uniform model, even for a seemingly “large” sample from, say, a party-size bag of over a thousand candies– although it can be unintuitive, giving the impression of “designed” non-uniformity in the distribution of colors.

So, given two randomly purchased packs each with n candies of d equally likely colors, what is the probability p(n,d) that they are identical? We can compute this probability exactly as the coefficient of an appropriate generating function:

p(n,d) = \frac{1}{d^{2n}}[\frac{x^{2n}}{(n!)^2}](\sum_{k=0}^n (\frac{x^k}{k!})^2)^d

For completeness, following is example C++ source code for computing these probabilities using arbitrary-precision rational arithmetic:

#include "math_Rational.h"
#include <map>
#include <iostream>
using namespace math;

// Map powers of x to coefficients.
typedef std::map<int, Rational> Poly;

// Return f(x)*g(x) mod x^m.
Poly product_mod(Poly f, Poly g, int m)
{
    Poly h;
    for (int k = 0; k < m; ++k)
    {
        for (int j = 0; j <= k; ++j)
        {
            h[k] += f[j] * g[k - j];
        }
    }
    return h;
}

Rational p(int n, int d)
{
    // Compute g(x) = 1 + ... + (x^n/n!)^2.
    Poly g; g[0] = 1;
    Rational factorial = 1, d_2n = 1;
    for (int k = 1; k <= n; ++k)
    {
        factorial *= k;
        d_2n *= (d * d);
        g[2 * k] = 1 / (factorial * factorial);
    }

    // Compute f(x) = g(x)^d mod x^(2n+1).
    Poly f; f[0] = 1;
    for (int k = 0; k < d; ++k)
    {
        f = product_mod(f, g, 2 * n + 1);
    }

    // Return [x^(2n)]f(x) * (n!)^2 / d^(2n).
    return f[2 * n] * (factorial * factorial) / d_2n;
}

int main()
{
    std::cout << p(60, 5) << std::endl;
}

This yields, for example, p(60,5) \approx 1/10254. In other words, buy two 60-candy packs, and observe whether they are identical. Then buy another pair of 60-candy packs, etc., until you find an identical pair. You should expect to buy about 10,254 pairs of packs on average.

(Aside: This is effectively the same problem as discussed here a couple of years ago.)

However, we must again account for the variability in total number n of candies per pack. That is, p(n,5) is the probability that two packs are identical, conditioned on them both containing exactly n candies. In question (1), we only needed the maximum range of possible values of n, but here we need the actual probability density f(n), so that we can integrate f(n)^2 p(n,5) over all possible n.

Fortunately, the variance of the approximately normally distributed pack size (we can be reasonably confident in the mean of 60) doesn’t change the answer much: the probability that two randomly purchased packs– of possibly different sizes– will be identical is somewhere between about 1/100,000 and 1/200,000.
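As a rough floating-point check of this integration, the following sketch re-implements the generating function computation of p(n,5) and weights it by an assumed pack-size density; here f(n) is taken to be Binomial(100, 0.6), as in the simulation below. That choice of f(n) is an assumption– the densities considered in the original estimate may differ slightly– but the result lands in the same range.

from math import comb, factorial

def p_identical(n, d=5):
    # Floating-point version of the generating function computation above.
    g = [0.0] * (2 * n + 1)
    for k in range(n + 1):
        g[2 * k] = 1.0 / factorial(k) ** 2
    f = [0.0] * (2 * n + 1)
    f[0] = 1.0
    for _ in range(d):                  # f(x) = g(x)^d mod x^(2n+1)
        h = [0.0] * (2 * n + 1)
        for i, fi in enumerate(f):
            if fi:
                for j in range(0, 2 * n + 1 - i, 2):
                    h[i + j] += fi * g[j]
        f = h
    return f[2 * n] * float(factorial(n)) ** 2 / float(d) ** (2 * n)

def f_pack(n):
    # Assumed pack-size density: Binomial(100, 0.6), mean 60 candies per pack.
    return comb(100, n) * 0.6 ** n * 0.4 ** (100 - n)

q = sum(f_pack(n) ** 2 * p_identical(n) for n in range(40, 81))
print(1 / q)   # roughly in the 100,000 to 200,000 range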

Expected number of packs until first duplicate

Finally, question (3) is essentially a messy birthday problem: instead of insisting that a particular pair of Skittles packs be identical, how many packs must we buy on average for some pair to be identical? The numbers from question (2) in the hundred thousands may seem large, but a back-of-the-envelope square-root estimate of “only” about 400-500 packs to find a duplicate was, to me, surprisingly small. This is confirmed by simulation: let’s assume that every pack of Skittles independently contains a (100,0.6)-binomially distributed number of candies– for a mean of 60 candies per pack– with each individual candy’s color independently uniformly distributed. We repeatedly buy packs until we first encounter one that is identical to one we have previously purchased. The figure below shows the distribution of the number of packs we need to buy, based on one million Monte Carlo simulations, where the mean of 524 packs is shown in red.

Distribution of number of packs needed until encountering a first duplicate.
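For reference, here is a minimal sketch of this simulation under exactly those assumptions; only a small number of trials are run here, whereas the figure above is based on one million.

import random
from collections import Counter
from statistics import mean

def packs_until_duplicate(rng=random):
    # Buy simulated packs until one exactly matches a previously purchased composition.
    seen = set()
    count = 0
    while True:
        count += 1
        n = sum(rng.random() < 0.6 for _ in range(100))       # Binomial(100, 0.6) pack size
        counts = Counter(rng.randrange(5) for _ in range(n))  # uniformly random colors
        pack = tuple(counts[color] for color in range(5))
        if pack in seen:
            return count
        seen.add(pack)

print(mean(packs_until_duplicate() for _ in range(100)))  # noisy estimate of the mean of 524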

A box of 36 packs of Skittles costs about 21 dollars… so an experiment to search for two identical packs should only cost about $300… on average.

[Edit: See this follow-up post describing the results of this experiment.]


Random drug testing in the NFL

Introduction

“This is supposed to be a random system. It doesn’t feel very random.” – Eric Reid, as quoted on Twitter

Last week, Eric Reid was selected to take his fifth random drug test in eight weeks with the Carolina Panthers. That might seem like a lot. It might even seem like Reid is perhaps the target of extra scrutiny by the NFL, particularly given his social activism, viewed as controversial by some, and his involvement in a current collusion lawsuit against the league.

So is the drug testing really random, or is Reid justified in his complaint? From the NFL Players Association Policy on Performance-Enhancing Substances:

“Each week during the preseason and regular season, ten (10) Players on every Club will be tested. By means of a computer program, the Independent Administrator will randomly select the Players to be tested from the Club’s active roster, practice squad list, and reserve list who are not otherwise subject to ongoing reasonable cause testing for performance-enhancing substances.”

As will be shown shortly, this is a pretty straightforward example of our very human habit of perceiving patterns where only randomness exists. But there is an interesting mathematical problem buried here as well, challenging enough that I can only provide an approximate solution.

Let’s make the setup more precise: suppose that for each of n=8 weeks we select, with replacement, a random subset of s=10 of t=72 players on a team to take a drug test. (Reid only signed with the Panthers eight weeks ago, and I am assuming that the 72 players comprise 53 active, 10 practice, and 9 reserve.) There are three reasonable questions to ask:

  1. What is the probability that a particular player (e.g., Eric Reid) will be selected for testing m=5 or more times over this time period?
  2. What is the probability that at least one player on the team (i.e., not necessarily Reid) will be selected for testing m=5 or more times?
  3. What is the probability that at least one player in the 32-team league will be selected for testing m=5 or more times?

Question 1: P(Eric Reid is selected 5 or more times)

The first question is easy to answer, and is unfortunately the only question asked in most of the popular press. The probability of being selected in any single week is s/t, and so the probability of being selected at least m times in n weeks is

q = \sum_{k=m}^n {n \choose k} (\frac{s}{t})^k (1-\frac{s}{t})^{n-k}

which equals approximately 0.002, or only slightly more than one chance in 500.
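For reference, evaluating this sum directly:

from math import comb

n, m, s, t = 8, 5, 10, 72
p_week = s / t
q = sum(comb(n, k) * p_week ** k * (1 - p_week) ** (n - k) for k in range(m, n + 1))
print(q)   # approximately 0.002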

But if there is a moral to this story, it’s this: You will almost certainly not win the lottery… but almost certainly someone will. That is, the second (or really the third) question is the right one to ask: what is the probability that some player will be selected so many times?

Question 2: P(some Carolina player is selected 5 or more times)

This is the hard problem that motivated this post. There is some similarity to the “Double Dixie Cup” version of the coupon collector problem with group drawings, where the players are coupons, but instead of requiring at least m copies of each coupon in n drawings, here we ask for at least m copies of at least one coupon (or the complementary equivalent, for at most m-1 copies of each coupon).

If we define the generating function

g(x_1,x_2,...,x_n) = \prod_{k=1}^n (1+x_k)

and h(\cdot) to be the expansion of g(\cdot) with all terms removed where the sum of exponents is at least m, then the desired probability may be expressed as

p = 1-\frac{[\prod_{k=1}^n x_k^s] h(\cdot)^t}{[\prod_{k=1}^n x_k^s] g(\cdot)^t}

where the denominator is simply {t \choose s}^n. But unfortunately I don’t see a computationally feasible way to evaluate the coefficient in the numerator. Fortunately, we can get a good lower bound on p using inclusion-exclusion and Bonferroni’s inequality:

p \geq t q - \frac{{t \choose 2}}{{t \choose s}^n}\sum_{i=m}^n \sum_{j=m}^n \sum_{k=\max(0,i+j-n)}^{\min(i,j)} f(i,j,k)

f(i,j,k) = {n \choose k}{{n-k} \choose {i-k}} {{n-i} \choose {j-k}} {{t-2} \choose {s-2}}^k {{t-2} \choose {s-1}}^{i+j-2k} {{t-2} \choose s}^{n-i-j+k}

yielding a probability p \geq 0.136 that some Carolina player would be selected 5 or more times over 8 weeks.
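Since the exact coefficient extraction appears infeasible, a Monte Carlo estimate provides a useful check on this bound; the following is a minimal sketch of the weekly selection process described above.

import random

def max_tests_one_team(weeks=8, squad=72, tested=10, rng=random):
    # One simulated team: count how many times each player is selected over the
    # given weeks, and return the maximum over players.
    counts = [0] * squad
    for _ in range(weeks):
        for player in rng.sample(range(squad), tested):
            counts[player] += 1
    return max(counts)

trials = 100000
p_est = sum(max_tests_one_team() >= 5 for _ in range(trials)) / trials
print(p_est)   # should come out near the 0.136 lower bound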

Question 3: P(some NFL player is selected 5 or more times)

Finally, while this is happening, the other 31 teams in the league are subjecting their players to the same random drug testing procedure. The probability that some player on some team will experience 5 or more random drug tests over a span of 8 weeks is

1-(1-p)^{32} \geq 0.99

In other words, it is a near certainty that some player in the league would experience the number of random tests that Reid has. Indeed, by linearity of expectation, we should expect an average of 32 t q \approx 4.6 players to find themselves in a similar situation over a similar time period. How many players actually did experience multiple tests over the last couple of months would be interesting and useful data to add to the discussion.


Generating Mini-Crosswords

Introduction

The New York Times publishes “mini-crosswords,” which are crossword puzzles on a relatively small grid (usually 5×5), without any black squares, so that every row and column of the grid must spell a word. The figure below shows an example of a solution to such a puzzle.

How hard is it to create these puzzles? This post is motivated by a recent College Mathematics Journal article (see Reference (1) below) that considers this question, and describes an approach using the Metropolis-Hastings algorithm to randomly sample instances of puzzles.

But instead of just randomly sampling one puzzle at a time, can we actually enumerate all possible puzzles? In particular, my idea was to reduce the problem of finding a crossword puzzle solution to that of finding a (generalized) exact cover with appropriately crafted constraints. This would be handy, because we already have code for solving exact cover problems, using Knuth’s Dancing Links (DLX) algorithm (see here and here for similar past exercises).

Crossword as an exact cover

To state the problem more precisely: given a positive integer n indicating the size of the grid (n=5 in the above example), and a dictionary W of m words each of length n over alphabet \Sigma, we must construct an exact cover problem whose solution corresponds to a placement of letters in all n^2 grid positions such that each row and column spells a word in W.

To do this, we construct a zero-one matrix with 2mn rows, each corresponding to placing one of the m dictionary words either “across” (in one of the n rows of the puzzle) or “down” (in one of the n columns). A solution will consist of a subset of 2n rows of the matrix: n “across” words, one in each row of the puzzle, and n “down” words, one in each column, each pair of which intersect in the appropriate common letter.

To represent the constraints, we initially need 2n^2\cdot|\Sigma| columns in our matrix (where |\Sigma|=26), each indexed by a tuple (puzzle row i, puzzle column j, alphabet letter c, across or down). For a given matrix row– representing placement of a word with letters (w_1, w_2, ..., w_n) in a particular location and orientation in the puzzle grid– we set to one those n\cdot|\Sigma| columns corresponding to the n different (i,j) grid locations where the word will be placed in the puzzle… where for each alphabet letter c, we set the “across” column to one if and only if either

  • the word is “across” and c matches the letter of the word in this location (i.e., c=w_j), or
  • the word is “down” and c does not match the letter of the word in this location (i.e., c \neq w_i).

If neither of these conditions is satisfied, we instead set the “down” column to one. The following Python code produces the number and list of (row, column) pairs of the resulting sparse matrix.

from itertools import product

# Assumes `words` is the list of n-letter dictionary words and `file` is the
# output stream for the sparse matrix.
letters = 'abcdefghijklmnopqrstuvwxyz'
m = len(words)
n = len(words[0])
print(m * n * 2 * (n * 26), file=file)   # total number of nonzero entries
b = [True, False]
for row, ((w, word), k, across) in enumerate(
            product(enumerate(words), range(n), b)):
    for col, (i, j, letter, horiz) in enumerate(
            product(range(n), range(n), letters, b)):
        if ((i if across else j) == k and
            (word[j if across else i] == letter) == (across == horiz)):
            print(row, col, file=file)

However, we’re not quite finished. Although the desired end result is a crossword with distinct words in each row and column, such as the 6×6 solution shown in (a) below, as Howard observes in the CMJ article, there are many valid solutions that use the same word more than once, including the extreme cases of symmetric “word squares” such as the one shown in (b) below.

(a) 6×6 mini-crossword, where the word in each row and column is unique. (b) 6×6 “word square,” a symmetric grid with each row and corresponding column containing the same word.

We can eliminate this duplication by adding m “optional” columns to the zero-one matrix, one for each word in the dictionary, and solve the resulting generalized exact cover problem, so that each word may be used at most once in a solution.
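In terms of the sparse output produced by the listing above, one way to emit these extra entries is sketched below; it reuses words, n, b, and file from that listing, and assumes the DLX solver is told to treat the last m columns (indices 2n²·26 and beyond) as optional/secondary– that detail depends on the solver’s input format. The entry count printed at the top of the listing would also need to grow by 2mn.

# Sketch: one optional column per dictionary word, so each word is used at most once.
num_primary = 2 * n * n * 26
for row, ((w, word), k, across) in enumerate(
            product(enumerate(words), range(n), b)):
    print(row, num_primary + w, file=file)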

Results

All of the source code is available here, as well as on GitHub. My initial test used the 4×4 case discussed in Howard’s paper, with his dictionary of 1826 words. He describes a process for estimating the total number of possible puzzles by repeated sampling using the Markov chain Monte Carlo approach: “We estimate that there are approximately 73,000–74,000 distinct puzzles each with no repeated words.” This is pretty accurate; it turns out that there are exactly 74,339 (each contributing a symmetric pair of solutions to the generalized exact cover problem, for a total of 148,678 solutions).

References:

  1. Howard, C. Douglas, It’s Puzzling, College Mathematics Journal, 49(4) September 2018, pp. 242-249 [DOI]
  2. Knuth, D., Dancing Links, Millennial Perspectives in Computer Science, 2000, pp. 187-214 (arXiv)

