Identical packs of Skittles

Introduction

“No two rainbows are the same. Neither are two packs of Skittles. Enjoy an odd mix.” – Skittles label

Analyzing packs of Skittles (or sometimes M&Ms) seems to be a very common exercise in introductory statistics. How many candies are in each pack? How many candies of each individual color? Are the colors uniformly distributed?

The motivation for this post is to ask some questions raised by the claim in the above quote:

  1. How many different possible packs of Skittles are there? Here we consider two packs of Skittles to be distinguishable only by the number of candies of each color.
  2. What is the probability that two randomly purchased packs of Skittles are the same?
  3. What is the expected number of packs that must be purchased until first encountering a duplicate?

Number of possible packs

If there are n candies in a 2.17-ounce pack, and each candy is one of d=5 colors, then the number of possible distinguishable packs is

{n+d-1 \choose d-1}

Interestingly, a Skittles commercial from the 1990s suggests that there are 371,292 different possible packs (about 27 seconds into the video). It’s not clear whether this number is based on any mathematics or just marketing… but it’s actually reasonably close to the value 367,290 that we get with the above formula assuming that there are n=52 candies in a pack.
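
As a quick check of this “stars and bars” count, here is a short Python snippet (using only the standard library; math.comb requires Python 3.8 or later):

from math import comb

def packs(n, d=5):
    # Number of distinguishable packs containing n candies of d colors.
    return comb(n + d - 1, d - 1)

print(packs(52))   # 367290, the count quoted above for a 52-candy pack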

However, the total number of candies varies from pack to pack. Most studies suggest an average of about 60 candies per pack– with 635,376 possible packs of exactly that size– and this study includes a couple of outlier packs with as few as 42 and as many as 85 candies. Summing the binomial coefficients over this range of possible pack sizes gives 42,578,514 possible distinguishable packs… and even if we more conservatively allow for all pack sizes from empty up to, say, 100 candies, we remain within the same order of magnitude, and can confidently assert that there are at most 100 million different possible packs of Skittles.

Total number of possible packs of Skittles, from “empty” to given maximum number of candies per pack.
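
These cumulative counts are easy to reproduce; continuing the Python sketch above, summing over the 42-to-85-candy range from the linked study, and then over the more conservative 0-to-100 range:

from math import comb

def packs(n, d=5):
    return comb(n + d - 1, d - 1)

print(sum(packs(n) for n in range(42, 86)))    # 42578514
print(sum(packs(n) for n in range(0, 101)))    # 96560646, just under 100 million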

Mars Wrigley Confectionery advertises production of 200 million individual candies every day; if we make the extremely conservative assumption that these are distributed among equal numbers of packs of various sizes (i.e., for every 2.17-ounce pack, there is, for example, a corresponding 3.4-pound party pack), so that the 2.17-ounce packs make up only about 1.6% of total production, this works out to approximately 50,000 packs per day. By the pigeonhole principle, after only about 1800 days of production, there must be two identical packs of Skittles out there somewhere… but identical packs are likely much more common than this worst-case analysis suggests, as we’ll see shortly.

Probability of identical packs

The second question requires a bit more work. Let’s temporarily assume that there are always exactly n=60 candies in every pack, so that there are {64 \choose 4}, or 635,376 possible packs. We might naively assume that the probability that two randomly purchased packs are identical is 1/635376. But this is only true if every possible pack is equally likely, e.g., a pack of 60 all-red candies is just as likely as a pack with 12 candies of each color. I can’t envision a method of production and packaging that would yield this uniform distribution.

Instead, let’s assume that each individual candy’s color is independently and identically distributed, with each of the d=5 colors equally likely. For example, imagine an equally large number of candies of each color mixed together in one giant urn (I’m a mathematician, so I have to call it an urn), and dispensed roughly 60 at a time into each pack.

This tracks very well with the actual observed distribution of colors of candies within and across packs. Some packs have more yellows, some have more reds, etc., but there is almost never a color missing, and almost never exactly the same number of every color… but with more and more observed packs, the overall distribution of colors is indeed very nearly uniform. In other words, we should expect this kind of variability under the uniform model, even for a seemingly “large” sample from, say, a party-size bag of over a thousand candies– variability that can be unintuitive, giving the impression of a “designed” non-uniform distribution of colors.

So, given two randomly purchased packs each with n candies of d equally likely colors, what is the probability p(n,d) that they are identical? We can compute this probability exactly as the coefficient of an appropriate generating function:

p(n,d) = \frac{1}{d^{2n}}[\frac{x^{2n}}{(n!)^2}](\sum_{k=0}^n (\frac{x^k}{k!})^2)^d

For completeness, following is example C++ source code for computing these probabilities using arbitrary-precision rational arithmetic:

#include "math_Rational.h"
#include <map>
#include <iostream>
using namespace math;

// Map powers of x to coefficients.
typedef std::map<int, Rational> Poly;

// Return f(x)*g(x) mod x^m.
Poly product_mod(Poly f, Poly g, int m)
{
    Poly h;
    for (int k = 0; k < m; ++k)
    {
        for (int j = 0; j <= k; ++j)
        {
            h[k] += f[j] * g[k - j];
        }
    }
    return h;
}

Rational p(int n, int d)
{
    // Compute g(x) = 1 + ... + (x^n/n!)^2.
    Poly g; g[0] = 1;
    Rational factorial = 1, d_2n = 1;
    for (int k = 1; k <= n; ++k)
    {
        factorial *= k;
        d_2n *= (d * d);
        g[2 * k] = 1 / (factorial * factorial);
    }

    // Compute f(x) = g(x)^d mod x^(2n+1).
    Poly f; f[0] = 1;
    for (int k = 0; k < d; ++k)
    {
        f = product_mod(f, g, 2 * n + 1);
    }

    // Return [x^(2n)]f(x) * (n!)^2 / d^(2n).
    return f[2 * n] * (factorial * factorial) / d_2n;
}

int main()
{
    std::cout << p(60, 5) << std::endl;
}

This yields, for example, p(60,5) equals approximately 1/10254. In other words, buy two 60-candy packs, and observe whether they are identical. Then buy another pair of 60-candy packs, etc., until you find an identical pair. You should expect to buy about 10,254 pairs of packs on average.

(Aside: This is effectively the same problem as discussed here a couple of years ago.)

However, we must again account for the variability in total number n of candies per pack. That is, p(n,5) is the probability that two packs are identical, conditioned on them both containing exactly n candies. In question (1), we only needed the maximum range of possible values of n, but here we need the actual probability distribution f(n), so that we can sum f(n)^2 p(n,5) over all possible n.

Fortunately, the variance of the approximately normally distributed pack size (we can be reasonably confident in the mean of 60) doesn’t change the answer much: the probability that two randomly purchased packs– of possibly different sizes– will be identical is somewhere between about 1/100,000 and 1/200,000.
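
As a sketch of this calculation, assuming for concreteness that pack size is binomially distributed as (100, 0.6), the same model used in the simulation below, the following Python code computes p(n,5) exactly with rational arithmetic (the same generating function coefficient as the C++ code above) and sums f(n)^2 p(n,5) over the plausible range of pack sizes:

from fractions import Fraction
from math import comb, factorial

def p_identical(n, d=5):
    # [x^n] (sum_{k=0}^{n} x^k/(k!)^2)^d, times (n!)^2 / d^(2n).
    g = [Fraction(1, factorial(k) ** 2) for k in range(n + 1)]
    f = [Fraction(1)] + [Fraction(0)] * n
    for _ in range(d):
        h = [Fraction(0)] * (n + 1)
        for i, fi in enumerate(f):
            if fi:
                for j in range(n + 1 - i):
                    h[i + j] += fi * g[j]
        f = h
    return f[n] * Fraction(factorial(n) ** 2, d ** (2 * n))

def pack_size_pmf(n, trials=100, p=Fraction(3, 5)):
    # Pack size modeled as Binomial(100, 0.6), with mean 60.
    return comb(trials, n) * p ** n * (1 - p) ** (trials - n)

prob = sum(pack_size_pmf(n) ** 2 * p_identical(n) for n in range(42, 86))
print(float(prob), 1 / float(prob))   # should land in the 1/100,000 to 1/200,000 range quoted above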

Expected number of packs until first duplicate

Finally, question (3) is essentially a messy birthday problem: instead of insisting that a particular pair of Skittles packs be identical, how many packs must we buy on average for some pair to be identical? The odds from question (2)– on the order of one in a hundred thousand– may seem long, but a back-of-the-envelope square-root estimate suggests that “only” about 400-500 packs are needed to find a duplicate, which was, to me, surprisingly small. This is confirmed by simulation: let’s assume that every pack of Skittles independently contains a (100,0.6)-binomially distributed number of candies– for a mean of 60 candies per pack– with each individual candy’s color independently uniformly distributed. We repeatedly buy packs until we first encounter one that is identical to one we have previously purchased. The figure below shows the distribution of the number of packs we need to buy, based on one million Monte Carlo simulations, where the mean of 524 packs is shown in red.

Distribution of number of packs needed until encountering a first duplicate.
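
Here is a minimal Python version of this Monte Carlo experiment (far fewer runs than the million used for the figure, but enough to see the order of magnitude):

import random
from collections import Counter

def packs_until_duplicate(colors=5, trials=100, p=0.6):
    # Buy packs until one repeats a previously seen color composition.
    seen = set()
    count = 0
    while True:
        count += 1
        n = sum(random.random() < p for _ in range(trials))   # Binomial(100, 0.6) pack size
        c = Counter(random.randrange(colors) for _ in range(n))
        pack = tuple(c[i] for i in range(colors))
        if pack in seen:
            return count
        seen.add(pack)

runs = [packs_until_duplicate() for _ in range(1000)]
print(sum(runs) / len(runs))   # should be in the neighborhood of the 524-pack mean above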

A box of 36 packs of Skittles costs about 21 dollars… so an experiment to search for two identical packs should only cost about $300… on average.


Random drug testing in the NFL

Introduction

“This is supposed to be a random system. It doesn’t feel very random.” – Eric Reid, as quoted on Twitter

Last week, Eric Reid was selected to take his fifth random drug test in eight weeks with the Carolina Panthers. That might seem like a lot. It might even seem like Reid is perhaps the target of extra scrutiny by the NFL, particularly given his social activism, viewed as controversial by some, and his involvement in a current collusion lawsuit against the league.

So is the drug testing really random, or is Reid justified in his complaint? From the NFL Players Association Policy on Performance-Enhancing Substances:

“Each week during the preseason and regular season, ten (10) Players on every Club will be tested. By means of a computer program, the Independent Administrator will randomly select the Players to be tested from the Club’s active roster, practice squad list, and reserve list who are not otherwise subject to ongoing reasonable cause testing for performance-enhancing substances.”

As will be shown shortly, this is a pretty straightforward example of our very human habit of perceiving patterns where only randomness exists. But there is an interesting mathematical problem buried here as well, challenging enough that I can only provide an approximate solution.

Let’s make the setup more precise: suppose that for each of n=8 weeks we select, with replacement, a random subset of s=10 of t=72 players on a team to take a drug test. (Reid only signed with the Panthers eight weeks ago, and I am assuming that the 72 players comprise 53 active, 10 practice, and 9 reserve.) There are three reasonable questions to ask:

  1. What is the probability that a particular player (e.g., Eric Reid) will be selected for testing m=5 or more times over this time period?
  2. What is the probability that at least one player on the team (i.e., not necessarily Reid) will be selected for testing m=5 or more times?
  3. What is the probability that at least one player in the 32-team league will be selected for testing m=5 or more times?

Question 1: P(Eric Reid is selected 5 or more times)

The first question is easy to answer, and is unfortunately the only question asked in most of the popular press. The probability of being selected in any single week is s/t, and so the probability of being selected at least m times in n weeks is

q = \sum_{k=m}^n {n \choose k} (\frac{s}{t})^k (1-\frac{s}{t})^{n-k}

which equals approximately 0.002, or only slightly more than one chance in 500.
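
This is just a binomial tail probability; in Python, for example:

from math import comb

def q(n=8, m=5, s=10, t=72):
    # Probability that one particular player is selected at least m times in n weeks.
    p = s / t
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(m, n + 1))

print(q())   # approximately 0.002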

But if there is a moral to this story, it’s this: You will almost certainly not win the lottery… but almost certainly someone will. That is, the second (or really the third) question is the right one to ask: what is the probability that some player will be selected so many times?

Question 2: P(some Carolina player is selected 5 or more times)

This is the hard problem that motivated this post. There is some similarity to the “Double Dixie Cup” version of the coupon collector problem with group drawings, where the players are coupons, but instead of requiring at least m copies of each coupon in n drawings, here we ask for at least m copies of at least one coupon (or the complementary equivalent, for at most m-1 copies of each coupon).

If we define the generating function

g(x_1,x_2,...,x_n) = \prod_{k=1}^n (1+x_k)

and h(\cdot) to be the expansion of g(\cdot) with all terms removed where the sum of exponents is at least m, then the desired probability may be expressed as

p = 1-\frac{[\prod_{k=1}^n x_k^s] h(\cdot)^t}{[\prod_{k=1}^n x_k^s] g(\cdot)^t}

where the denominator is simply {t \choose s}^n. Unfortunately, I don’t see a computationally feasible way to evaluate the coefficient in the numerator. However, we can get a good lower bound on p using inclusion-exclusion and Bonferroni’s inequality:

p \geq t q - \frac{{t \choose 2}}{{t \choose s}^n}\sum_{i=m}^n \sum_{j=m}^n \sum_{k=\max(0,i+j-n)}^{\min(i,j)} f(i,j,k)

f(i,j,k) = {n \choose k}{{n-k} \choose {i-k}} {{n-i} \choose {j-k}} {{t-2} \choose {s-2}}^k {{t-2} \choose {s-1}}^{i+j-2k} {{t-2} \choose s}^{n-i-j+k}

yielding a probability p \geq 0.136 that some Carolina player would be selected 5 or more times over 8 weeks.
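
For anyone who wants to check these numbers, here is a Python sketch that evaluates the Bonferroni lower bound above with exact rational arithmetic (and, looking ahead to question 3, the corresponding league-wide figures):

from fractions import Fraction
from math import comb

n, m, s, t = 8, 5, 10, 72
p_week = Fraction(s, t)
q = sum(comb(n, k) * p_week ** k * (1 - p_week) ** (n - k) for k in range(m, n + 1))

def f(i, j, k):
    # Counts for two particular players selected i and j times, overlapping in k weeks.
    return (comb(n, k) * comb(n - k, i - k) * comb(n - i, j - k)
            * comb(t - 2, s - 2) ** k
            * comb(t - 2, s - 1) ** (i + j - 2 * k)
            * comb(t - 2, s) ** (n - i - j + k))

pairs = sum(f(i, j, k)
            for i in range(m, n + 1)
            for j in range(m, n + 1)
            for k in range(max(0, i + j - n), min(i, j) + 1))

p_lower = t * q - Fraction(comb(t, 2) * pairs, comb(t, s) ** n)
print(float(p_lower))                   # the 0.136 lower bound above
print(1 - (1 - float(p_lower)) ** 32)   # question 3: about 0.99
print(32 * t * float(q))                # expected number of players league-wide, about 4.6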

Question 3: P(some NFL player is selected 5 or more times)

Finally, while this is happening, the other 31 teams in the league are subjecting their players to the same random drug testing procedure. The probability that some player on some team will experience 5 or more random drug tests over a span of 8 weeks is

1-(1-p)^{32} \geq 0.99

In other words, it is a near certainty that some player in the league would experience the number of random tests that Reid has. Indeed, by linearity of expectation, we should expect an average of 32 t q \approx 4.6 players to find themselves in a similar situation over a similar time period. It would be interesting and useful to know how many players actually did experience this many tests over the last couple of months, to add to the discussion.


Generating Mini-Crosswords

Introduction

The New York Times publishes “mini-crosswords,” which are crossword puzzles on a relatively small grid (usually 5×5), without any black squares, so that every row and column of the grid must spell a word. The figure below shows an example of a solution to such a puzzle.

How hard is it to create these puzzles? This post is motivated by a recent College Mathematics Journal article (see Reference (1) below) that considers this question, and describes an approach using the Metropolis-Hastings algorithm to randomly sample instances of puzzles.

But instead of just randomly sampling one puzzle at a time, can we actually enumerate all possible puzzles? In particular, my idea was to reduce the problem of finding a crossword puzzle solution to that of finding a (generalized) exact cover with appropriately crafted constraints. This would be handy, because we already have code for solving exact cover problems, using Knuth’s Dancing Links (DLX) algorithm (see here and here for similar past exercises).

Crossword as an exact cover

To state the problem more precisely: given a positive integer n indicating the size of the grid (n=5 in the above example), and a dictionary W of m words each of length n over alphabet \Sigma, we must construct an exact cover problem whose solution corresponds to a placement of letters in all n^2 grid positions such that each row and column spells a word in W.

To do this, we construct a zero-one matrix with 2mn rows, each corresponding to placing one of the m dictionary words either “across” (in one of the n rows of the puzzle) or “down” (in one of the n columns). A solution will consist of a subset of 2n rows of the matrix: n “across” words, one in each row of the puzzle, and n “down” words, one in each column, each pair of which intersect in the appropriate common letter.

To represent the constraints, we initially need 2n^2\cdot|\Sigma| columns in our matrix (where |\Sigma|=26), each indexed by a tuple (puzzle row i, puzzle column j, alphabet letter c, across or down). For a given matrix row– representing placement of a word with letters (w_1, w_2, ..., w_n) in a particular location and orientation in the puzzle grid– we set to one those n\cdot|\Sigma| columns corresponding to the n different (i,j) grid locations where the word will be placed in the puzzle… where for each alphabet letter c, we set the “across” column to one if and only if either

  • the word is “across” and c matches the letter of the word in this location (i.e., c=w_j), or
  • the word is “down” and c does not match the letter of the word in this location (i.e., c \neq w_i).

If neither of these conditions is satisfied, we instead set the “down” column to one. The following Python code produces the number and list of (row, column) pairs of the resulting sparse matrix.

from itertools import product

# Here 'words' is a list of equal-length dictionary words, and 'file' is an
# already-open output stream for the exact cover (DLX) solver input.
letters = 'abcdefghijklmnopqrstuvwxyz'
m = len(words)
n = len(words[0])
print(m * n * 2 * (n * 26), file=file)
b = [True, False]
for row, ((w, word), k, across) in enumerate(
            product(enumerate(words), range(n), b)):
    for col, (i, j, letter, horiz) in enumerate(
            product(range(n), range(n), letters, b)):
        if ((i if across else j) == k and
            (word[j if across else i] == letter) == (across == horiz)):
            print(row, col, file=file)

However, we’re not quite finished. Although the desired end result is a crossword with distinct words in each row and column, such as the 6×6 solution shown in (a) below, as Howard observes in the CMJ article, there are many valid solutions that use the same word more than once, including the extreme cases of symmetric “word squares” such as the one shown in (b) below.

(a) 6×6 mini-crossword, where the word in each row and column is unique. (b) 6×6 “word square,” a symmetric grid with each row and corresponding column containing the same word.

We can eliminate this duplication by adding m “optional” columns to the zero-one matrix, one for each word in the dictionary, and solve the resulting generalized exact cover problem, so that each word may be used at most once in a solution.
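
Here is a sketch of what those extra entries look like in the sparse matrix format above (the word list and column indexing are just for illustration; the actual runs use Howard’s dictionary and append these pairs to the DLX input generated above). Each matrix row placing word w also covers one optional column, indexed after the primary constraint columns, so the generalized cover uses each word at most once:

import sys
from itertools import product

# Toy word list for illustration only.
words = ["heart", "ember", "above", "revue", "trees"]
n = len(words[0])
b = [True, False]
num_primary = n * n * 26 * 2   # the primary constraint columns described above
file = sys.stdout

for row, ((w, word), k, across) in enumerate(
        product(enumerate(words), range(n), b)):
    # Each placement of word w also covers optional column (num_primary + w).
    print(row, num_primary + w, file=file)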

Results

All of the source code is available here, as well as on GitHub. My initial test used the 4×4 case discussed in Howard’s paper, with his dictionary of 1826 words. He describes a process for estimating the total number of possible puzzles by repeated sampling using the Markov chain Monte Carlo approach: “We estimate that there are approximately 73,000–74,000 distinct puzzles each with no repeated words.” This is pretty accurate; it turns out that there are exactly 74,339 (each contributing a transpose-symmetric pair of solutions to the generalized exact cover problem, for a total of 148,678 solutions).

References:

  1. Howard, C. Douglas, It’s Puzzling, College Mathematics Journal, 49(4) September 2018, pp. 242-249 [DOI]
  2. Knuth, D., Dancing Links, Millennial Perspectives in Computer Science, 2000, pp. 187-214 (arXiv)

 


Disabling subnormals in MATLAB

Suppose that we want to compute the following expression, somewhat contrived in complexity for the purpose of example:

y = \frac{1}{s}\sum_{i=1}^{10^7} \frac{s}{2^{50}}

The following MATLAB code implements this formula, and measures the time required to evaluate it:

tic;
scale = 1;
y = 0;
for i = 1:10000000
    x = scale;
    for j = 1:50
        x = x * 0.5;
    end
    y = y + x;
end
y = y / scale;
toc

Now suppose that we execute this same code again, but this time changing the “scale” factor s to a much smaller value: scale = realmin, corresponding to s=2^{-1022}, the smallest positive normalized floating-point number. Inspection of the formula above suggests that the value of y should not depend on the changed value of s (as long as it is non-zero); we may spend most of our time working with much smaller numbers, but the end result should be the same.

And indeed, execution of the modified code confirms that we get exactly the same result… but it takes nearly 20 times longer to do so on my laptop than the original version with s=1. And there are more complex– and less contrived– calculations where the difference in performance is even greater.

The problem is that these smaller numbers are subnormal, small enough in magnitude to require a slightly different encoding than “normal” numbers which make up most of the range of representable floating-point numbers. Arithmetic operations can be significantly more expensive when required to recognize and manipulate this “special case” encoding.

Depending on your application, there may be several approaches to handling this problem:

  1. Rewrite your code to prevent encountering subnormals in the first place. In the above contrived example, this is easy to do: just shift the “scale,” or magnitude, of all values involved in the computation away from the subnormal range (and possibly shift back only at the end if necessary). This can result not only in faster code but also in more accurate results, since subnormal numbers have fewer “significant” mantissa bits in their representation.
  2. Disable subnormal numbers altogether, so that for any floating-point operation, input arguments or output results that would otherwise be treated as subnormal are instead “flushed” to zero.

We have seen above how to manage (1). For (2), the following MEX function does nothing but set the appropriate processor flags to disable subnormals. I have only tested this on Windows 7 with an Intel laptop, compiling in MATLAB R2017b with both Microsoft Visual Studio 2015 and the MinGW-w64 MATLAB Add-On (edit: a reader has also tried this on Linux Mint 19 with MATLAB R2018b and GCC 7.3.0):

#include <xmmintrin.h>
#include <pmmintrin.h>
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    //_MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _mm_setcsr((_mm_getcsr() & ~0x8000) | (0x8000));
    //_MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
    _mm_setcsr((_mm_getcsr() & ~0x0040) | (0x0040));
}

After running this MEX function in a MATLAB session, re-running the modified calculation above gets all of the speed back… but now at a different cost: instead of the correct value y=10^7/2^{50}, every term in the sum has been flushed to zero, resulting in a final incorrect value of y=0.

If there is any moral to this story, it’s that you’re a test pilot. First, this was a very simple test setup; it’s an interesting question whether any MATLAB built-in functions might reset these flags back to the slower subnormal support, and whether it is feasible in your application to reset them back again, possibly repeatedly as needed. And second, even after any algorithm refactoring to minimize the introduction of subnormals, can your application afford the loss of accuracy resulting from flushing them to zero? MathWorks’ Cleve Moler seems to think the answer is always yes. I think the right answer is, it depends.

 


A lattice path puzzle

This past week’s Riddler puzzle on FiveThirtyEight asks for the number of different paths of minimum length from a starting intersection of city streets to a destination m blocks east and n blocks north. Put another way, moving on the 2D integer lattice graph, how many paths are there from the origin (0,0) to vertex (m,n) that are of minimum length?

Constraining the paths to minimum length greatly simplifies the problem. So let’s generalize, and instead ask for the number of paths from (0,0) to (m,n) of length k— so that the original problem asks for the particular case k=|m|+|n|, but what if we allow longer paths where we sometimes move in the “wrong” direction away from the destination?
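
If you would like to experiment before working out the closed form, here is a small brute-force Python counter for the generalized problem (fine for small k):

from functools import lru_cache

def count_paths(m, n, k):
    # Number of k-step walks on the 2D integer lattice from (0,0) to (m,n),
    # where each step moves one unit north, south, east, or west.
    @lru_cache(maxsize=None)
    def walks(x, y, steps):
        if steps == 0:
            return 1 if (x, y) == (m, n) else 0
        return sum(walks(x + dx, y + dy, steps - 1)
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    return walks(0, 0, k)

print(count_paths(3, 2, 5))   # minimum-length case: 10 = C(5, 2)
print(count_paths(3, 2, 7))   # allow one step in the "wrong" direction (and one to undo it)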

I think this is a nice problem, with an elegant solution only slightly more complex than the original posed in the Riddler column. As a hint, the animation below visualizes the result, where the path length k increases with each frame, showing the probability distribution of the endpoint of a 2D random walk.

Probability distribution of endpoint of 2D lattice random walk, vs. number of steps.

Perhaps as another hint, note the checkerboard pattern to the distribution; only “half” of the vertices are reachable for a particular path length k, and which half is reachable alternates as k increases.


Arbitrary-precision rational arithmetic in C++

Introduction

This is a follow-up to a post from several years ago describing a C++ implementation of arbitrary-precision unsigned integer arithmetic. This weekend I extended this to also support arbitrary-precision signed integers and rational numbers. Although this started as an educational tool, it now feels a bit more complete, and actually usable for the combinatorics and probability applications of the sort that are frequently discussed here.

I tried to stick to the original objectives of relatively simple and hopefully readable code, with stand-alone, header-only implementation, as freely available in the public domain as legally possible.

The code is available here, as well as on GitHub, in three header files:

  • #include "math_Unsigned.h" defines a math::Unsigned type representing the natural numbers with all of the sensible arithmetic, bitwise, and relational operators, essentially everything except bitwise one’s complement… although more on this shortly.
  • #include "math_Integer.h" defines an Integer type with a sign-and-magnitude implementation in terms of Unsigned, with all corresponding operators, including bitwise operators having two’s complement semantics assuming “infinite sign extension.”
  • #include "math_Rational.h" defines a Rational type implemented in terms of Integer numerator and denominator.

This was a fun exercise; there were interesting challenges in developing each of the three classes. As discussed previously, the unsigned type handles the actual arbitrary-precision representation (implemented as a vector<uint32_t> of digits in base 2^{32}), where division is by far the most complex operation to implement efficiently.

The implementation of the signed integer type is relatively straightforward… except for the bitwise operators. Assuming a sign-and-magnitude representation (using an Unsigned under the hood), it is an interesting exercise to work out how to implement bitwise and, or, xor, and not, so that they have two’s complement semantics even for negative operands. In the process, I had to add an “AND NOT” operator to the original underlying unsigned type (there is actually a built-in operator &^ for this in Go).
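
As an aside on testing (not part of the library itself, just a convenient sanity check): Python’s built-in arbitrary-precision integers already use exactly these “infinite sign extension” two’s complement semantics for their bitwise operators, so they make a handy oracle for checking a sign-and-magnitude implementation:

# Python ints behave like two's complement integers with infinite sign extension,
# so these are the results a correct Integer implementation should reproduce.
for a, b in [(-12, 10), (-12, -10), (12, -10)]:
    print(a, b, '->', a & b, a | b, a ^ b)

# Bitwise NOT satisfies ~x == -x - 1 for every integer x.
assert all(~x == -x - 1 for x in range(-100, 101))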

With this machinery in place, the rational type is the simplest to implement. The only wrinkle here is that a few additional constructors are needed, since user-defined conversions from the more primitive integral types (e.g., Rational from Integer, Integer from int32_t, etc.) are only implicitly applied “one level deep.”

Example application: Are seven riffle shuffles enough?

To test and demonstrate use of these classes, consider riffle shuffling a standard poker deck of 52 playing cards. How many shuffles are sufficient to “fully randomize” the deck? A popular rule of thumb, attributed to Bayer and Diaconis, is that seven riffle shuffles are recommended. (See a longer list of references here, along with some simpler counting arguments that at least six shuffles are certainly necessary.)

This recommendation is based on analysis of the Gilbert-Shannon-Reeds model of a single riffle shuffle, and of the total variation distance between probability distributions Q^m and U, where Q^m is the distribution of arrangements of the deck after m GSR riffle shuffles, and U is the desired uniform distribution where every arrangement is equally likely. We can compute this total variation distance exactly as a function of the number m of shuffles, as demonstrated in the following example code:

#include "math_Rational.h"
#include <iostream>
using namespace math;

Integer factorial(int n)
{
    Integer f = 1;
    for (int k = 1; k <= n; ++k)
    {
        f *= k;
    }
    return f;
}

Integer binomial(int n, int k)
{
    if (0 <= k && k <= n)
    {
        return factorial(n) / factorial(k) / factorial(n - k);
    }
    else
    {
        return 0;
    }
}

Integer power(int base, int exp)
{
    Integer n = 1;
    for (int k = 0; k < exp; ++k)
    {
        n *= base;
    }
    return n;
}

Integer eulerian(int n, int k)
{
    Integer r = 0;
    for (int j = 0; j < k + 2; ++j)
    {
        r += (power(-1, j) * binomial(n + 1, j) * power(k + 1 - j, n));
    }
    return r;
}

Rational total_variation_distance(int cards, int shuffles)
{
    Rational q = 0;
    for (int r = 1; r <= cards; ++r)
    {
        Rational a = Rational(
            binomial((1 << shuffles) + cards - r, cards),
            Integer(1) << (cards * shuffles)) - Rational(1, factorial(cards));
        q += (eulerian(cards, r - 1) * (a < 0 ? -a : a));
    }
    return q / 2;
}

int main()
{
    int cards = 52;
    for (int shuffles = 0; shuffles <= 15; ++shuffles)
    {
        std::cout << shuffles << " " <<
            total_variation_distance(cards, shuffles).to_double() << std::endl;
    }
}

The following figure shows the results. Total variation distance ranges from a maximum of one (between discrete distributions with disjoint support) to a minimum of zero, in this case corresponding to an exactly uniform distribution of arrangements of the deck.

Total variation distance vs. number of GSR riffle shuffles of a standard 52-card deck.

We can see the sharp threshold behavior, where total variation distance transitions from near one to near zero over just a few shuffles, first dropping below 1/2 at seven shuffles.


Uncertainty in trading passengers for fuel

I had an interesting experience recently while preparing for a flight from Los Angeles to Baltimore. It was a completely full flight– initially, at least– with myself and 174 other passengers who had already boarded the Southwest 737-800, seemingly ready to push back and get on our way.

However, after a delay of several minutes, a flight attendant came on the PA and asked for two– specifically two– volunteers to give up their seat, in exchange for a flight later that afternoon. Two people immediately jumped up, left the airplane, and then we were ready to go… now with two empty seats.

The problem was weight: due to a changing forecast of bad weather, both in Baltimore and en route, we had taken on additional fuel at the last minute (e.g., to allow for diverting to a possibly now-more-distant alternate airport), resulting in the airplane exceeding its maximum takeoff weight. Something had to go, and apparently two passengers and their carry-on bags were a sufficient reduction in weight to allow us to take off.

What I found interesting about this episode was the relative precision of the change– 175 (or even 174) passengers bad, 173 passengers good– compared with the uncertainty in the total weight of the passengers, personal items, and carry-on bags remaining on board. That is, how does the airline know how much we weigh? Since Southwest does not ask individual passengers for their weight, let alone ask them to step on an actual scale prior to boarding, some method of estimation is required.

The FAA provides guidance on how to do this (see reference below): for large-cabin aircraft, the assumed average weight of an adult passenger, his or her clothing, personal items, and a carry-on bag is 190 pounds, with a standard deviation of 47 pounds. The figure below shows the resulting probability distribution of the total weight of all 175 passengers on the initially completely full flight:

Distribution of total weight of 175 passengers on a Southwest Boeing 737-800.

It’s worth noting that the referenced Advisory Circular does provide a more detailed breakdown of assumed average passenger weight, to account for season of travel (5 more pounds of clothing in the winter), gender, children vs. adults, and “nonstandard weight groups” such as sports teams, etc. But for this summer flight, with a relatively even split of male and female passengers, the only simplifying assumption in the above figure is no kids.

The point is that this seems like a significant amount of uncertainty in the actual total weight of the airplane, for less than 400 pounds to be the difference between “Nope, we’re overweight” and “Okay, we’re safe to take off.”
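
To put rough numbers on that uncertainty, here is a back-of-the-envelope sketch using the AC 120-27E figures above:

from math import sqrt

# FAA assumed summer average: 190 lb per adult passenger including clothing,
# personal items, and a carry-on bag, with a standard deviation of 47 lb.
mean, sd, passengers = 190, 47, 175

total_mean = passengers * mean      # 33,250 lb
total_sd = sd * sqrt(passengers)    # about 622 lb
removed = 2 * mean                  # about 380 lb saved by removing two passengers

print(total_mean, round(total_sd), removed)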

Reference:

  • Federal Aviation Administration Advisory Circular AC-120-27E, “Aircraft Weight and Balance Control,” 10 June 2005 [PDF]