Serializing MATLAB data

Consider the following problem: given a value in the MATLAB programming language, can we serialize it into a sequence of bytes– suitable for, say, storage on disk– in a form that allows easy recovery of the exact original value?

Although I will eventually try to provide an actual solution, the primary motivation for this post is simply to point out some quirks and warts of the MATLAB language that make this problem surprisingly difficult to solve.

“Binary” serialization

Our problem requires a bit of clarification, since there are at least a couple of different reasonable use cases.  First, if we can work with a stream of arbitrary opaque bytes– for example, if we want to send and receive MATLAB data on a TCP socket connection– then there is actually a very simple and robust built-in solution… as long as we’re comfortable with undocumented functionality.  The function b=getByteStreamFromArray(v) converts a value to a uint8 array of bytes, and v=getArrayFromByteStream(b) converts back.  This works on pretty much all types of data I can think of to test, even Java- and user-defined class instances.
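
For example, here is a sketch of the round trip (with the usual caveat that undocumented behavior may change between releases):

v = struct('name', {'x', 'y'}, 'data', {1:3, []});
b = getByteStreamFromArray(v);    % serialize to a uint8 row vector of bytes
w = getArrayFromByteStream(b);    % ... and deserialize
isequal(v, w)                     % true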

Text serialization

But what if we would like something human-readable (and thus potentially human-editable)?  That is, we would like a function similar to Python’s repr that converts a value to a char string representation, so that eval(repr(v)) “equals” v.  (I say “equals” because even testing such a function is hard to do in MATLAB.  I suppose the built-in function isequaln is the closest approximation to what we’re looking for, but it ignores type information, so that isequaln(int8(5), single(5)) is true, for example.)
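
For what it’s worth, a stricter test of the round-trip property might look like the following sketch, where repr_equal is my own helper, not a built-in:

function eq = repr_equal(v, w)
%REPR_EQUAL True if w has the same class and (isequaln) value as v.
%   Note this still ignores negative zero and NaN payloads, for example.
    eq = strcmp(class(v), class(w)) && isequaln(v, w);
end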

Without further ado, following is my attempt at such an implementation, to use as you wish:

function s = repr(v)
%REPR Return string representation of value such that eval(repr(v)) == v.
%
%   Class instances, NaN payloads, and function handle closures are not
%   supported.

    if isstruct(v)
        s = sprintf('cell2struct(%s, %s)', ...
            repr(struct2cell(v)), repr(fieldnames(v)));
    elseif isempty(v)
        sz = size(v);
        if isequal(sz, [0, 0])
            if isa(v, 'double')
                s = '[]';
            elseif ischar(v)
                s = '''''';
            elseif iscell(v)
                s = '{}';
            else
                s = sprintf('%s([])', class(v));
            end
        elseif isa(v, 'double')
            s = sprintf('zeros(%s)', mat2str(sz, 17));
        elseif iscell(v)
            s = sprintf('cell(%s)', mat2str(sz, 17));
        else
            s = sprintf('%s(zeros(%s))', class(v), mat2str(sz, 17));
        end
    elseif ~ismatrix(v)
        nd = ndims(v);
        s = sprintf('cat(%d, %s)', nd, strjoin(cellfun(@repr, ...
            squeeze(num2cell(v, 1:(nd - 1))).', ...
            'UniformOutput', false), ', '));
    elseif isnumeric(v)
        if ~isreal(v)
            s = sprintf('complex(%s, %s)', repr(real(v)), repr(imag(v)));
        elseif isa(v, 'double')
            s = strrep(repr_matrix(@arrayfun, ...
                @(x) char(java.lang.Double.toString(x)), v, ...
                '[%s]', '%s'), 'inity', '');
        elseif isfloat(v)
            s = strrep(repr_matrix(@arrayfun, ...
                @(x) char(java.lang.Float.toString(x)), v, ...
                '[%s]', 'single(%s)'), 'inity', '');
        elseif isa(v, 'uint64') || isa(v, 'int64')
            t = class(v);
            s = repr_matrix(@arrayfun, ...
                @(x) sprintf('%s(%s)', t, int2str(x)), v, '[%s]', '%s');
        else
            s = mat2str(v, 'class');
        end
    elseif islogical(v) || ischar(v)
        s = mat2str(v);
    elseif iscell(v)
        s = repr_matrix(@cellfun, @repr, v, '%s', '{%s}');
    elseif isa(v, 'function_handle')
        s = sprintf('str2func(''%s'')', func2str(v));
    else
        error('Unsupported type.');
    end
end

function s = repr_matrix(map, repr_scalar, v, format_matrix, format_class)
    s = strjoin(cellfun(@(row) strjoin(row, ', '), ...
        num2cell(map(repr_scalar, v, 'UniformOutput', false), 2).', ...
                                     'UniformOutput', false), '; ');
    if ~isscalar(v)
        s = sprintf(format_matrix, s);
    end
    s = sprintf(format_class, s);
end

That felt like a lot of work… and that’s only supporting the “plain old data” types: struct and cell arrays, function handles, logical and character arrays, and the various floating-point and integer numeric types.  As the help indicates, Java and classdef instances are not supported.  A couple of other cases are only imperfectly handled as well, as we’ll see shortly.
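
A few examples of the kind of output to expect (the exact digits come from Java’s shortest-round-trip formatting):

repr(single(pi))        % 'single(3.1415927)'
repr({int8(5); 1/3})    % '{int8(5); 0.3333333333333333}'
isequal(eval(repr(single(pi))), single(pi))    % true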

Struct arrays

The code starts with struct arrays.  The tricky issue here is that struct arrays can not only be “empty” in the usual sense of having zero elements, but also, independently of whether they are empty, they can have no fields.  It turns out that the struct constructor, which would work fine for “normal” structures with one or more fields, has limited expressive power when it comes to field-less struct arrays: unless the size is 1x1 or 0x0, some additional concatenation or reshaping is required.  Fortunately, cell2struct handles all of these cases directly.
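
For example, here is the field-less case that struct alone cannot express (a sketch, mirroring the expression that repr itself generates):

v = cell2struct(cell([0, 2, 3]), cell([0, 1]));
size(v)          % [2 3]: a 2x3 struct array...
fieldnames(v)    % ... with no fields at all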

Multi-dimensional arrays

Next, after handling the tedious cases of empty arrays of various types, the ~ismatrix(v) test handles multi-dimensional arrays– that is, arrays with more than 2 dimensions.  I could have handled this with reshape instead, but I think this recursive concatenation approach does a better job of preserving the “visual shape” of the data.

In the process of testing this, I learned something interesting about multi-dimensional arrays: they can’t have trailing singleton dimensions!  That is, there are 1×1 arrays, and 2×1 arrays, even 1x2x3 and 2x1x3 arrays… but no matter how hard I try, I cannot construct an mxnx1 array, or an mxnxkx1 array, etc.  MATLAB seems to always “squeeze” trailing singleton dimensions automagically.
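
For example:

size(reshape(1:6, [2, 3, 1]))    % [2 3]: the trailing singleton is squeezed
size(reshape(1:6, [1, 2, 3]))    % [1 2 3]: leading/interior singletons survive
ndims(ones(2, 3, 1, 1))          % 2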

Numbers

The isnumeric(v) section is what makes this problem almost comically complicated.  There are 10 different numeric types in MATLAB: double and single precision floating point, and signed and unsigned 8-, 16-, 32-, and 64-bit integers.  Serializing arrays of these types should be the job of the built-in function mat2str, which we do lean on here, but only for the shorter integer types, since it fails in several ways for the other numeric types.

First, the nit-picky stuff: I should emphasize that my goal is “round-trip” reproducibility; that is, after converting to string and back, we want the underlying bytes representing the numeric values to be unchanged.  Precision is one issue: for some reason, MATLAB’s default seems to be 15 decimal digits, which isn’t enough, by two, to accurately reproduce all double precision values.  Granted, precision is an optional argument to mat2str, which effectively uses sprintf('%.17g', x) under the hood, but Java’s algorithm does a better job of limiting the number of digits that are actually needed for any given value.
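
For example:

v = 0.1 + 0.2;               % slightly more than 0.3
eval(mat2str(v)) == v        % false: the default 15 digits yield '0.3'
eval(mat2str(v, 17)) == v    % true: '0.30000000000000004'
char(java.lang.Double.toString(v))   % also '0.30000000000000004', but with
                                     % no excess digits when fewer suffice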

Other reasons to bypass mat2str are that (1) for some reason it explicitly “erases” negative zero, and (2) it still doesn’t quite accurately handle complex numbers involving NaN, although it has improved in recent releases.  Witness eval(mat2str(complex(0, nan))), for example.  (My implementation isn’t perfect here, either: there are multiple representations of NaN, but this function strips any payload.)
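
To see both problems, and how the Java-based formatting sidesteps them:

1 / eval(mat2str(-0.0))    % Inf: mat2str prints '0', losing the sign of zero
1 / eval(repr(-0.0))       % -Inf: java.lang.Double.toString yields '-0.0'
real(eval(mat2str(complex(0, nan))))    % NaN, but the real part should be 0
real(eval(repr(complex(0, nan))))       % 0, as desired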

But MATLAB’s behavior with 64-bit integer types is the most interesting of all, I think.  Imagine things from the parser’s perspective: any numeric literal defaults to double precision, which, without a decimal point or fractional part, we can think of as “almost” an int54.  There is no separate syntax for integer literals; construction of “literal” values of the shorter (8-, 16-, and 32-bit) integer types effectively casts from that double-precision literal to the corresponding integer type.

But for uint64 and int64, this doesn’t work… and for a while (until around R2010a), it really didn’t work– there was no way to directly construct a 64-bit integer larger than 2^53, if it wasn’t a power of two!

This behavior has been improved somewhat since then, but at the expense of added complexity in the parser: the expression [u]int64(expr) is now a special case, as long as expr is an integer literal, with no arithmetic, imaginary part, etc.  Even so much as a unary plus will cause a fall-back to the usual cast-from-double.  (It appears that Octave, at least as of version 4.0.3, has not yet worked this out.)
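
For example, with 2^53 + 1, which is not representable in double precision:

uint64(9007199254740993)     % 9007199254740993: the parser's special case
uint64(+9007199254740993)    % 9007199254740992: a unary plus defeats it,
                             % forcing the usual cast from a rounded double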

The effect on this serialization function is that we have to wrap that explicit uint64 or int64 construction around each individual integer scalar, instead of a single cast of the entire array expression as we can do with all of the other numeric types.

Function handles

Finally, function handles are also special.  First, they must be scalar (i.e., 1×1), most likely due to the language syntax ambiguity between array indexing and function application.  But function handles can also have workspace variables associated with them, usually when created anonymously, and although an existing function handle and its associated workspace can be inspected, there does not appear to be a way to create one from scratch in a single evaluatable expression.
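
For example (a sketch; note that str2func is documented to create anonymous functions without access to variables in the calling workspace):

f = eval(repr(@(x) x + 1));    % no captured variables: round-trips fine
f(2)                           % 3
a = 2;
g = @(x) a * x;                % captures a = 2 from the workspace
h = eval(repr(g));             % reconstructs only the text '@(x)a*x'...
h(3)                           % ... so this fails: 'a' is undefined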


IBM Research Ponder This: August 2016 Puzzle

This month’s IBM Research puzzle is pretty interesting: it sets the stage by describing a seemingly much simpler problem, then asking for a solution to a more complicated variant.  But even that simpler problem is worth a closer look.

I won’t spoil the “real” problem here– actually, I won’t approach that problem at all.  Instead, let’s start at the very beginning:

Problem 1: You have 10 bags, each with 1000 gold coins.  Each coin should weigh exactly 10 grams, but one of the bags contains all counterfeit coins, weighing only 9 grams each.  You have a scale on which you can accurately measure the weight of any number of coins.  With just one weighing, how can you determine which bag contains the counterfeit coins?

This is a standard, job-interview-ish, “warm-up” version of the problem, with the following solution: arbitrarily number the bags 1 through 10, and take k coins from bag k, for a total of 55 coins.  If all of the coins were genuine, the total weight would be 550 grams; the measured deficit from this weight indicates the number of the counterfeit bag.

So far, so good.  The IBM Research version of the problem extends this in two ways (and eventually a third): first, what happens if more than one bag– or possibly none of the bags– might be counterfeit?  And second, what if there is a limited number n of coins available in each bag (where, for example, n=1000 above)?  From the IBM Research page:

If n \geq 1024 then one can identify the [subset of] counterfeit bags using a single measurement with an accurate weighing scale.  (How?)

That is, instead of knowing that exactly one bag is counterfeit, any of the 2^{10}=1024 possible subsets of bags may be counterfeit, and our objective is to determine which subset, with just a single weighing of coins carefully chosen from each bag.
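
The statement telegraphs the binary-weights idea: take 2^(k-1) coins from bag k, so that the total weight deficit, read in binary, identifies exactly which bags are counterfeit.  A sketch:

bags = 10;
counterfeit = logical([0 1 0 0 1 0 0 0 0 1]);    % the unknown subset
coins = 2 .^ (0:bags-1);                         % 1, 2, 4, ..., 512 coins
weight = sum(coins .* (10 - counterfeit));       % the single measurement, in grams
deficit = 10 * sum(coins) - weight;              % each fake coin is 1 gram light
isequal(logical(bitget(deficit, 1:bags)), counterfeit)    % true

Note that the largest number of coins this scheme requires from any single bag is 2^9 = 512, which is where the improved bound in (a) below comes from.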

The motivation for this post is that, although the statement above seems to telegraph the intended elegant solution to this “warm-up” version of the problem, (a) that intended solution can actually be implemented with only n=512 coins in each bag, not 1024; and (b) even this stronger bound is not tight.  That is, even before approaching the “real” problem asked in this month’s IBM Research puzzle, the following simpler variant is equally interesting:

Problem 2: What is the minimum value of n (i.e., the minimum number of coins in each bag), for which a single weighing is sufficient to determine the subset of bags containing counterfeit coins?

Hint: I ended up tackling this by writing code to solve smaller versions of the problem, then leaning on the OEIS to gain insight into the general solution.

[Edit: The puzzle statement on the IBM Research page has since been updated to reflect that the “nice”– but still not tight– bound is 512, not 1024.]


Twisty lattice paths, all alike

I haven’t posted a puzzle in quite a while, so…

Every morning, I drive from my home at 1st and A to my work at 9th and J (8 blocks east and 9 blocks north), always heading either east or north so as not to backtrack… but I choose a random route each time, with all routes being equally likely.  How many turns should I expect to make?

And a follow-up: on busy city streets, left turns are typically much more time-consuming than right turns.  On average, how many left turns should I expect to make?

The first problem is presented in Zucker’s paper referenced below, and solved using induction on a couple of recurrence relations.  But it seems to me there is a much simpler solution, that more directly addresses the follow-up as well.
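
A quick Monte Carlo sanity check of one’s answer, before consulting the paper (a sketch, with east/north moves encoded as 1/0):

trials = 100000;
e = 8;                                      % blocks east
n = 9;                                      % blocks north
turns = 0;
lefts = 0;
for t = 1:trials
    route = [ones(1, e), zeros(1, n)];
    route = route(randperm(e + n));         % all routes equally likely
    turns = turns + sum(diff(route) ~= 0);  % a turn at each change of direction
    % Heading east, then turning north, is a left turn (north-to-east is a right).
    lefts = lefts + sum(route(1:end-1) == 1 & route(2:end) == 0);
end
fprintf('mean turns = %.3f, mean left turns = %.3f\n', turns/trials, lefts/trials);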

Reference:

  • Zucker, M., Lattice Paths and Harmonic Means, The College Mathematics Journal, 47(2) March 2016, p. 121-124 [JSTOR]


Rainbows

Recently my wife and I saw a double rainbow, shown below.  I’m a lousy photographer, but it’s there if you look closely, along the top of the picture:

[Photo: double rainbow, visible along the top of the picture.]

There are plenty of existing sources that describe the mathematics of rainbows: briefly, rays of sunlight (entering the figure below from the right) strike airborne droplets of water, bending due to refraction, reflecting off the inside of the droplet, and finally exiting “back” toward an observer.

[Figure: a ray of sunlight refracting into a spherical water droplet, reflecting internally, and exiting back toward the observer.]

The motivation for this post is to describe the calculations involved in predicting where rainbows can be seen, but focusing on generalization: with just one equation, so to speak, we can let the sunlight bounce around a while inside the water droplet, and predict all of the different “higher-order” rainbows (double, tertiary, etc.) at once.  I think this is a nice “real” problem for students of differential calculus; its solution involves little more than the chain rule and some trigonometry.

[Figure: geometry of a ray striking the droplet at angle \alpha and refracting to angle \beta.]

The figure above shows the basic setup, where angle \alpha \in (0, \pi/2) parameterizes the location where a ray of sunlight, entering horizontally from the right, strikes the surface of the spherical droplet.  The angle \beta is the corresponding normal angle of refraction, related by Snell’s law:

n = \frac{n_{water}}{n_{air}} = \frac{\sin \alpha}{\sin \beta}

where n_{air}=1.00029 and n_{water}=1.333 are the indices of refraction for air and water, respectively.  (Let’s defer for the moment the problem that these indices actually vary slightly with wavelength.)

As a function of \alpha, at what angle does the light ray exit the picture?  Starting with the angle \alpha shown in the center of the droplet, and “reflecting around” the center counterclockwise– and including the final additional refraction upon exiting the droplet– the desired angle \theta is

\theta(\alpha) = \alpha + (k+1)(\pi - 2\beta) + \alpha

where– and this is the generalization– the integer k indicates how many times we want the ray to reflect inside the droplet (k=1 in the figure above).  The figure below shows how this exit angle \theta varies with \alpha, for various numbers of internal reflections.

[Figure: exit angle \theta as a function of \alpha, for various numbers of internal reflections k.]

The key observation is that for k>0, there is a critical minimum exit angle, in the neighborhood of which there is a concentration of light rays.  We can find this critical angle by minimizing \theta as a function of \alpha.  (Note that there is no critical angle when k=0, and so a rainbow requires at least one reflection inside the droplets.)

At this point, we have a fairly straightforward, if somewhat tedious, calculus problem: find \theta where \theta'(\alpha)=0.  It’s an exercise for the reader to show that this occurs at:

\alpha = \cos^{-1}\sqrt{\frac{n^2-1}{k(k+2)}}

\theta = 2\alpha + (k+1)(\pi - 2\beta)

Evaluating at k=1 yields an angle \theta of about 318 degrees.  This requires some interpretation: imagine a line between you and the Sun.  Then standing with the Sun behind you, a primary rainbow is visible along a cone making an angle of about 360-318=42 degrees with this line.

(Coming back now to the dependence of index of refraction on wavelength: although the variation is negligible for air, the index of refraction for water varies from about 1.33 for red light to about 1.34 for violet light, so that the rainbow is not just a band of brighter light, but a spread of colors from about 41.1 to 42.5 degrees… with the red light on the outside of the band.)

For k=2, the critical \theta is about 411 degrees, corresponding to a viewing angle of about 51 degrees… with two differences: first, the additional internal reflection means that the light strikes the bottom half of each droplet, making a complete “loop” around the inside of the droplet before exiting toward the observer.  Second, this opposite direction of reflection means that the bands of colors are reversed, with red on the inside of the band.

Finally, the angle for k=3 is about 498 degrees: such a tertiary rainbow may be seen facing the Sun (good luck with that), in a sort of “corona” around it at an angle of about 42 degrees.
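
The following sketch reproduces all three of these critical angles at once:

n = 1.333 / 1.00029;        % relative index of refraction (ignoring wavelength)
for k = 1:3                 % number of internal reflections
    alpha = acos(sqrt((n^2 - 1) / (k * (k + 2))));
    beta = asin(sin(alpha) / n);                   % Snell's law
    theta = 2*alpha + (k + 1)*(pi - 2*beta);       % critical exit angle
    fprintf('k = %d: theta = %.0f degrees\n', k, theta*180/pi);
end
% prints 318, 411, and 498 degrees for k = 1, 2, 3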


Category theory without categories

(Following is an attempt to capture some of my thoughts following an interesting student discussion.  I think I mostly failed during that discussion, but it was fun.)

How do we know when two sets X and Y have the same “size”?  The layman answer is, “When they both have the same number of elements,” or even better, “When we can ‘pair up’ elements from X and Y, with each element a part of exactly one pair.”

The mathematician formalizes this idea as a bijection, that is, a function f:X \rightarrow Y that is both:

  • Injective, or “one-to-one”: if f(x_1)=f(x_2), then x_1 = x_2.  Intuitively, distinct elements of X map to distinct elements of Y.
  • Surjective, or “onto”: for every element y \in Y, there exists some x \in X such that f(x)=y.  Intuitively, we “cover” all of Y.

Bijections are handy tools in combinatorics, since they provide a way to count elements in Y, which might be difficult to do directly, by instead counting elements in X, which might be easier.

So far, so good.  Now suppose that a student asks: why are injectivity and surjectivity the critical properties of functions that “pair up” elements of two sets?

This question struck a chord with me, because I remember struggling with it myself while learning introductory algebra.  The ideas behind these two properties seemed vaguely “dual” to me, but the structure of the actual definitions looked very different.  (I remember thinking that, if anything, the properties that most closely “paired” with one another were injective and well-defined (i.e., if x_1 = x_2, then f(x_1)=f(x_2); compare this with the above definition of an injection).  Granted, well-definedness might seem obvious most of the time, but sometimes an object can have many names.)

One way to answer such a question is to provide alternative definitions for these properties that are equivalent to the originals, but where that elegance of duality is more explicit:

  • A function f:X \rightarrow Y is injective if and only if, for all functions g,h:W \rightarrow X, f \circ g = f \circ h implies g=h.
  • A function f:X \rightarrow Y is surjective if and only if, for all functions g,h:Y \rightarrow Z, g \circ f = h \circ f implies g=h.

It’s a great exercise to show that these new definitions are indeed equivalent to the earlier ones.  But if that’s the case, and if these are more pleasing to look at, then why don’t we always use/teach them instead?

I think there are at least two practical explanations for this.  First, these definitions are inconvenient.  That is, to use them to show that f:X \rightarrow Y is a bijection, we end up having to drag a third (and fourth) arbitrary hypothetical set W (and Z) into the mix, with corresponding arbitrary hypothetical functions to compose with f.  With the usual definitions, we can focus all of our attention solely on the two sets under consideration.

Second, these new definitions can be dangerous.  Often the function f we are trying to show is a bijection preserves some useful structural relationship between the sets X and Y (e.g., f may be a homomorphism)… but to use the new definitions, we must consider all possible additional functions g,h, not just that restricted class of functions that behaves similarly to f.  We have strayed into category theory, where monomorphisms and epimorphisms generalize the notions of injections and surjections, respectively.  I say “generalize,” because although for sets these definitions are equivalent, in other categories (rings, for example) things get more complicated: a monomorphism may not be injective, or (more frequently) an epimorphism may not be surjective.  For example, the inclusion of the integers into the rationals is a ring epimorphism that is not surjective.


Sump pump monitoring with a Raspberry Pi

I am not much of a mathematician, and even less of a programmer.  But I am at my most dangerous when wielding a screwdriver, attempting to be “handy.”  So this project was a lot of fun.

My house is situated on a hill, and the sump pump in my basement acts up seasonally, rain or shine, making me suspect a possible underground spring or stream.  To test this theory, my goal was to monitor the behavior of the sump pump, measure the rate at which water is ejected from the pit, and as a safety feature, to automatically send myself text messages in the event of any of several possible failure conditions (e.g., pump or float switch failure, excessive depth of water in the pit, etc.).

The setup is shown in the diagram below.  I was able to use mostly existing equipment from past educational projects, with the exception of some wiring, a length of PVC pipe, and an eTape liquid level sensor that I wanted to try out.  The sensor is a length of tape submerged in the sump pit (inside a PVC pipe to keep it vertical); it acts as a variable resistor, with the resistance decreasing linearly as the water rises in the pit.  Using a voltage divider circuit, the resistance is (indirectly) measured as a voltage drop across the tape, using a DI-145 analog-to-digital converter.  The A/D presents a simple USB-serial interface to a Raspberry Pi, where a few Python scripts handle the data recording, filtering, and messaging.

[Photo: Raspberry Pi 3 monitoring DI-145 A/D converted voltage across eTape liquid level sensor.]

The figure below shows a sample of one hour’s worth of recorded data.  The y-axis is the voltage drop across the tape, where lower voltage indicates higher water depth.  Note that the resistance is linear in the water depth, but the voltage is not.
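
To see why, assume the usual divider configuration, with the tape in series with a fixed reference resistor R_0 (my notation; the actual component values don’t matter here).  The measured voltage drop across the tape is then

V_{tape}(d) = V_{in} \cdot \frac{R_{tape}(d)}{R_{tape}(d) + R_0}

Since the (linear-in-depth) tape resistance appears in both the numerator and the denominator, the voltage is a rational, not linear, function of the water depth d.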

[Figure: One hour of recorded voltage: raw 120 Hz input (green), filtered (red), with detected pump cycles (blue).]

The green curve is the raw voltage; I’m not sure whether the noisy behavior is due to the tape sensor or the A/D.  Fortunately the DI-145 samples at 120 Hz (despite all documentation suggesting 240 Hz), so that a simple low-gain filter, shown in red, cleans things up enough to allow equally simple edge detections of the pump turning on, shown in blue.  (And this is after nearly two weeks of no rain.)  Based on those edge detections, the program sends me text messages with periodic updates, or alerts if the water depth gets too high, or the pump starts cycling too frequently, or abruptly stops cycling.

All of the Python code is available at the usual location here.


NCAA tournament brackets revisited

This is becoming an annual exercise.  Two years ago, I wrote about the probability of picking a “perfect” NCAA tournament bracket.  Last year, the topic was the impact of various systems for scoring brackets in office pools.

This year I just want to provide up-to-date historical data for anyone who might want to play with it, including all 32 seasons of the tournament in its current 64-team format, from 1985 to 2016.

(Before continuing, note that the 4 “play-in” games of the so-called “first” round are an abomination, and so I do not consider them here, focusing on the 63 games among the 64-team field.)

First, the data: the following 16×16 matrix indicates the number of regional games (i.e., prior to the Final Four) in which seed i beat seed j.  Note that the round in which each game was played is implied by the seed match-up (e.g., seeds 1 and 16 play in the first round, etc.).

   0  21  13  34  32   7   4  52  59   4   3  19   4   0   0 128
  23   0  25   2   0  23  54   2   0  27  12   1   0   0 120   0
   8  14   0   2   2  38   7   1   1   9  27   0   0 107   1   0
  15   4   3   0  36   2   2   3   2   2   0  23 102   0   0   0
   7   3   1  31   0   1   0   0   1   1   0  82  12   0   0   0
   2   6  28   1   0   0   4   0   0   4  82   0   0  14   0   0
   0  21   5   2   0   3   0   0   0  78   0   0   0   1   2   0
  12   3   0   5   2   1   1   0  64   0   0   0   1   0   0   0
   5   1   0   0   1   0   0  64   0   0   0   0   1   0   0   0
   1  18   4   0   0   2  50   0   0   0   1   0   0   1   5   0
   3   1  14   0   0  46   3   0   0   2   0   0   0   5   0   0
   0   0   0  12  46   0   0   1   0   0   0   0   8   0   0   0
   0   0   0  26   3   0   0   0   0   0   0   3   0   0   0   0
   0   0  21   0   0   2   0   0   0   0   0   0   0   0   0   0
   0   8   0   0   0   0   1   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0

The following matrix, in the same format, is for the Final Four games:

  12   6   2   5   1   0   1   1   1   1   0   0   0   0   0   0
   4   3   3   1   0   1   0   0   0   0   1   0   0   0   0   0
   4   2   0   2   0   0   0   0   0   0   1   0   0   0   0   0
   1   0   0   1   1   0   0   0   0   0   0   0   0   0   0   0
   0   1   0   0   1   0   0   1   0   0   0   0   0   0   0   0
   0   1   0   1   0   0   0   0   0   0   0   0   0   0   0   0
   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   2   0   0   0   0   0   0   0   0   1   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0

Finally, the following matrix is for the championship games:

   6   6   1   2   3   1   0   0   0   0   0   0   0   0   0   0
   2   0   3   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   2   1   0   0   0   0   1   0   0   0   0   0   0   0   0
   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0
   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0

We can update some of the past analysis using this new data as well.  For example, what is the probability of picking a “perfect” bracket, predicting all 63 games correctly?  As before, Schwertman (see reference below) suggests a couple of simple-but-reasonable models of the probability of seed i beating seed j given by

p_{i,j} = 1 - p_{j,i} = \frac{1}{2} + k(s_i - s_j)

where s_i is a measure of the “strength” of seed i, and k is a scaling factor controlling the range of resulting probabilities, in this case chosen so that p_{1,16}=129/130, the expected value of the corresponding beta distribution.

One simple strength function is s_i=-i, which yields an overall probability of a perfect chalk bracket of about 1 in 188 billion.  A slightly better historical fit is

s_i = \Phi^{-1}(1 - \frac{4i}{n}) 

where \Phi^{-1} is the quantile function of the normal distribution, and n=351 is the number of teams in Division I.  In this case, the estimated probability of a perfect bracket is about 1 in 91 billion.  In either case, a perfect bracket is far more likely– about 100 million times more likely– than the usually-quoted 1 in 9.2 quintillion figure that assumes all 2^{63} outcomes are equally likely.
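
Here is a sketch of this computation for the simpler strength function s_i=-i, where the games in a chalk bracket (within each of the 4 identical regions, and then among the four 1 seeds) are fixed in advance:

% Probability of a perfect chalk bracket, with p(i,j) = 1/2 + k*(j - i)
% and k chosen so that p(1,16) = 129/130.
k = (129/130 - 1/2) / 15;
region = {[1 16; 8 9; 5 12; 4 13; 6 11; 3 14; 7 10; 2 15], ...   % round 1
          [1 8; 4 5; 3 6; 2 7], [1 4; 2 3], [1 2]};              % rounds 2-4
p = 1;
for r = 1:numel(region)
    g = region{r};
    p = p * prod(1/2 + k * (g(:, 2) - g(:, 1)));   % favorite wins each game
end
p = p^4 * (1/2)^3;    % 4 identical regions, then 3 games between 1 seeds
fprintf('1 in %.3g\n', 1/p);    % about 1 in 1.88e+11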

References:

    1. Schwertman, N., McCready, T., and Howard, L., Probability Models for the NCAA Regional Basketball Tournaments, The American Statistician, 45(1) February 1991, p. 35-38 [PDF]