Twisty lattice paths, all alike

I haven’t posted a puzzle in quite a while, so…

Every morning, I drive from my home at 1st and A to my work at 9th and J (8 blocks east and 9 blocks north), always heading either east or north so as not to backtrack… but I choose a random route each time, with all routes being equally likely.  How many turns should I expect to make?

And a follow-up: on busy city streets, left turns are typically much more time-consuming than right turns.  On average, how many left turns should I expect to make?

The first problem is presented in Zucker’s paper referenced below, where it is solved using induction on a couple of recurrence relations.  But it seems to me there is a much simpler solution, one that more directly addresses the follow-up as well.
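
(For anyone who wants to experiment before working out the exact answer, here is a quick Monte Carlo sketch for estimating both expectations; the only assumption beyond the problem statement is that a turn from heading east to heading north counts as a left turn.)

import random

def random_route(east=8, north=9):
    """A uniformly random monotone route, as a shuffled sequence of moves."""
    moves = ['E'] * east + ['N'] * north
    random.shuffle(moves)          # every distinct route is equally likely
    return moves

def count_turns(moves):
    turns = sum(1 for a, b in zip(moves, moves[1:]) if a != b)
    lefts = sum(1 for a, b in zip(moves, moves[1:]) if (a, b) == ('E', 'N'))
    return turns, lefts

trials = 100000
results = [count_turns(random_route()) for _ in range(trials)]
print(sum(t for t, _ in results) / trials)   # estimated expected number of turns
print(sum(l for _, l in results) / trials)   # estimated expected number of left turns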

Reference:

  • Zucker, M., Lattice Paths and Harmonic Means, The College Mathematics Journal, 47(2) March 2016, p. 121-124 [JSTOR]

 


Rainbows

Recently my wife and I saw a double rainbow, shown below.  I’m a lousy photographer, but it’s there if you look closely, along the top of the picture:

[Figure: photo of the double rainbow]

There are plenty of existing sources that describe the mathematics of rainbows: briefly, rays of sunlight (entering the figure below from the right) strike airborne droplets of water, bending due to refraction, reflecting off the inside of the droplet, and finally exiting “back” toward an observer.

[Figure: a ray of sunlight entering a water droplet, refracting, reflecting off the inside, and exiting back toward the observer]

The motivation for this post is to describe the calculations involved in predicting where rainbows can be seen, but focusing on generalization: with just one equation, so to speak, we can let the sunlight bounce around a while inside the water droplet, and predict all of the different “higher-order” rainbows (secondary or “double,” tertiary, etc.) at once.  I think this is a nice “real” problem for students of differential calculus; its solution involves little more than the chain rule and some trigonometry.

[Figure: geometry of a ray striking a spherical droplet, with entry angle \alpha and refraction angle \beta]

The figure above shows the basic setup, where angle \alpha \in (0, \pi/2) parameterizes the location where a ray of sunlight, entering horizontally from the right, strikes the surface of the spherical droplet.  The angle \beta is the corresponding normal angle of refraction, related by Snell’s law:

n = \frac{n_{water}}{n_{air}} = \frac{\sin \alpha}{\sin \beta}

where n_{air}=1.00029 and n_{water}=1.333 are the indices of refraction for air and water, respectively.  (Let’s defer for the moment the problem that these indices actually vary slightly with wavelength.)

As a function of \alpha, at what angle does the light ray exit the picture?  Starting with the angle \alpha shown in the center of the droplet, and “reflecting around” the center counterclockwise– and including the final additional refraction upon exiting the droplet– the desired angle \theta is

\theta(\alpha) = \alpha + (k+1)(\pi - 2\beta) + \alpha

where– and this is the generalization– the integer k indicates how many times we want the ray to reflect inside the droplet (k=1 in the figure above).  The figure below shows how this exit angle \theta varies with \alpha, for various numbers of internal reflections.

[Figure: exit angle \theta as a function of \alpha, for various numbers of internal reflections k]

The key observation is that for k>0, there is a critical minimum exit angle, in the neighborhood of which there is a concentration of light rays.  We can find this critical angle by minimizing \theta as a function of \alpha.  (Note that there is no critical angle when k=0, and so a rainbow requires at least one reflection inside the droplets.)

At this point, we have a fairly straightforward, if somewhat tedious, calculus problem: find \theta where \theta'(\alpha)=0.  It’s an exercise for the reader to show that this occurs at:

\alpha = \cos^{-1}\sqrt{\frac{n^2-1}{k(k+2)}}

\theta = 2\alpha + (k+1)(\pi - 2\beta)
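
(A sketch of that exercise, for readers who want a head start: differentiating Snell’s law \sin \alpha = n \sin \beta gives \cos \alpha \, d\alpha = n \cos \beta \, d\beta, so that

\theta'(\alpha) = 2 - 2(k+1)\frac{\cos \alpha}{n \cos \beta}

Setting this to zero gives n \cos \beta = (k+1) \cos \alpha; squaring both sides and substituting n^2 \sin^2 \beta = \sin^2 \alpha = 1 - \cos^2 \alpha yields the expression for \alpha above.)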

Evaluating at k=1 yields an angle \theta of about 318 degrees.  This requires some interpretation: imagine a line between you and the Sun.  Then standing with the Sun behind you, a primary rainbow is visible along a cone making an angle of about 360-318=42 degrees with this line.

(Coming back now to the dependence of index of refraction on wavelength: although the variation is negligible for air, the index of refraction for water varies from about 1.33 for red light to about 1.34 for violet light, so that the rainbow is not just a band of brighter light, but a spread of colors from about 41.1 to 42.5 degrees… with the red light on the outside of the band.)

For k=2, the critical \theta is about 411 degrees, corresponding to a viewing angle of about 51 degrees… with two differences: first, the additional internal reflection means that the light strikes the bottom half of each droplet, making a complete “loop” around the inside of the droplet before exiting toward the observer.  Second, this opposite direction of reflection means that the bands of colors are reversed, with red on the inside of the band.

Finally, the angle for k=3 is about 498 degrees: such a tertiary rainbow may be seen facing the Sun (good luck with that), in a sort of “corona” around it at an angle of about 42 degrees.
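
Here is a quick numerical check of these critical angles; a minimal sketch, using the constant indices of refraction quoted above:

from math import acos, asin, sin, pi, degrees, sqrt

n = 1.333 / 1.00029  # index of refraction of water relative to air

for k in (1, 2, 3):
    alpha = acos(sqrt((n ** 2 - 1) / (k * (k + 2))))  # critical entry angle
    beta = asin(sin(alpha) / n)                       # Snell's law
    theta = 2 * alpha + (k + 1) * (pi - 2 * beta)     # exit angle at the critical point
    print(k, round(degrees(theta)))                   # about 318, 411, 498 degrees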

 

 


Category theory without categories

(Following is an attempt to capture some of my thoughts following an interesting student discussion.  I think I mostly failed during that discussion, but it was fun.)

How do we know when two sets X and Y have the same “size”?  The layman answer is, “When they both have the same number of elements,” or even better, “When we can ‘pair up’ elements from X and Y, with each element a part of exactly one pair.”

The mathematician formalizes this idea as a bijection, that is, a function f:X \rightarrow Y that is both:

  • Injective, or “one-to-one”: if f(x_1)=f(x_2), then x_1 = x_2.  Intuitively, distinct elements of X map to distinct elements of Y.
  • Surjective, or “onto”: for every element y \in Y, there exists some x \in X such that f(x)=y.  Intuitively, we “cover” all of Y.

Bijections are handy tools in combinatorics, since they provide a way to count elements in Y, which might be difficult to do directly, by instead counting elements in X, which might be easier.

So far, so good.  Now suppose that a student asks: why are injectivity and surjectivity the critical properties of functions that “pair up” elements of two sets?

This question struck a chord with me, because I remember struggling with it myself while first learning abstract algebra.  The ideas behind these two properties seemed vaguely “dual” to me, but the structure of the actual definitions looked very different.  (I remember thinking that, if anything, the property that most closely “paired” with injectivity was well-definedness (i.e., if x_1 = x_2, then f(x_1)=f(x_2); compare this with the above definition of an injection).  Granted, well-definedness might seem obvious most of the time, but sometimes an object can have many names.)

One way to answer such a question is to provide alternative definitions for these properties, equivalent to the originals, but in which that elegance of duality is more explicit:

  • A function f:X \rightarrow Y is injective if and only if, for all functions g,h:W \rightarrow X, f \circ g = f \circ h implies g=h.
  • A function f:X \rightarrow Y is surjective if and only if, for all functions g,h:Y \rightarrow Z, g \circ f = h \circ f implies g=h.

It’s a great exercise to show that these new definitions are indeed equivalent to the earlier ones.  But if that’s the case, and if these are more pleasing to look at, then why don’t we always use/teach them instead?
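
For a concrete (if tiny) illustration of the first of these, here is a finite-set sketch, representing f as a dict and functions g, h from a one-element set W as 1-tuples of elements of X; a non-injective f admits g \neq h with f \circ g = f \circ h:

from itertools import product

def left_cancel_counterexample(f, X, W_size=1):
    """Search for g != h : W -> X with f∘g = f∘h, ranging over all functions
    from a W_size-element set W into the finite set X."""
    for g in product(X, repeat=W_size):
        for h in product(X, repeat=W_size):
            if g != h and all(f[gw] == f[hw] for gw, hw in zip(g, h)):
                return g, h                  # f fails left-cancellation
    return None                              # no counterexample found

X = [0, 1, 2]
print(left_cancel_counterexample({0: 'a', 1: 'a', 2: 'b'}, X))  # not injective: finds ((0,), (1,))
print(left_cancel_counterexample({0: 'a', 1: 'b', 2: 'c'}, X))  # injective: None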

I think there are at least two practical explanations for this.  First, these definitions are inconvenient.  That is, to use them to show that f:X \rightarrow Y is a bijection, we end up having to drag a third (and fourth) arbitrary hypothetical set W (and Z) into the mix, with corresponding arbitrary hypothetical functions to compose with f.  With the usual definitions, we can focus all of our attention solely on the two sets under consideration.

Second, these new definitions can be dangerous.  Often the function f we are trying to show is a bijection preserves some useful structural relationship between the sets X and Y (e.g., f may be a homomorphism)… but to use the new definitions, we must consider all possible additional functions g,h, not just the restricted class of functions that behave similarly to f.  We have strayed into category theory, where monomorphisms and epimorphisms generalize the notions of injections and surjections, respectively.  I say “generalize” because, although these definitions are equivalent for sets, things get more complicated in other categories (rings, for example), where a monomorphism may not be injective, or (more frequently) an epimorphism may not be surjective; the inclusion of the integers into the rationals is a ring epimorphism that is not surjective.

 


Sump pump monitoring with a Raspberry Pi

I am not much of a mathematician, and even less of a programmer.  But I am at my most dangerous when wielding a screwdriver, attempting to be “handy.”  So this project was a lot of fun.

My house is situated on a hill, and the sump pump in my basement acts up seasonally, rain or shine, making me suspect a possible underground spring or stream.  To test this theory, my goal was to monitor the behavior of the sump pump, measure the rate at which water is ejected from the pit, and as a safety feature, to automatically send myself text messages in the event of any of several possible failure conditions (e.g., pump or float switch failure, excessive depth of water in the pit, etc.).

The setup is shown in the diagram below.  I was able to use mostly existing equipment from past educational projects, with the exception of some wiring, a length of PVC pipe, and an eTape liquid level sensor that I wanted to try out.  The sensor is a length of tape submerged in the sump pit (inside a PVC pipe to keep it vertical); it acts as a variable resistor, with the resistance decreasing linearly as the water rises in the pit.  Using a voltage divider circuit, the resistance is (indirectly) measured as a voltage drop across the tape, using a DI-145 analog-to-digital converter.  The A/D has a simple USB-serial output interface with a Raspberry Pi, where a few Python scripts handle the data recording, filtering, and messaging.

Raspberry Pi 3 monitoring DI-145 A/D converted voltage across eTape liquid level sensor.

The figure below shows a sample of one hour’s worth of recorded data.  The y-axis is the voltage drop across the tape, where lower voltage indicates higher water depth.  Note that the resistance is linear in the water depth, but the voltage is not.
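
(For reference, assuming the tape is in series with a fixed resistor R_0 and supply voltage V_{cc} (the details of the divider are my assumption here), the measured voltage across the tape is

V_{tape} = V_{cc} \frac{R_{tape}}{R_{tape} + R_0}

which is a nonlinear function of R_{tape}, and hence of the water depth, even though R_{tape} itself is linear in depth.)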

One hour of recorded voltage: raw 120 Hz input (green), filtered (red), with detected pump cycles (blue).

The green curve is the raw voltage; I’m not sure whether the noisy behavior is due to the tape sensor or the A/D.  Fortunately the DI-145 samples at 120 Hz (despite all documentation suggesting 240 Hz), so that a simple low-gain filter, shown in red, cleans things up enough to allow equally simple edge detections of the pump turning on, shown in blue.  (And this is after nearly two weeks of no rain.)  Based on those edge detections, the program sends me text messages with periodic updates, or alerts if the water depth gets too high, or the pump starts cycling too frequently, or abruptly stops cycling.
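
The gist of the filtering and cycle detection might look something like the following sketch; the filter gain and voltage thresholds here are illustrative placeholders, not the values used in the actual scripts:

def detect_pump_cycles(samples, gain=0.01, v_on=2.5, v_off=2.0):
    """A sketch: first-order low-gain filter plus hysteresis thresholds
    (all three parameters are illustrative, not the project's values)."""
    smoothed, cycles, above = [], [], False
    s = samples[0]
    for t, v in enumerate(samples):
        s += gain * (v - s)              # low-gain low-pass filter
        smoothed.append(s)
        if not above and s > v_on:       # rapid rise: pump just emptied the pit
            above = True
            cycles.append(t)
        elif above and s < v_off:        # slow fall: pit refilling
            above = False
    return smoothed, cycles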

All of the Python code is available at the usual location here.


NCAA tournament brackets revisited

This is becoming an annual exercise.  Two years ago, I wrote about the probability of picking a “perfect” NCAA tournament bracket.  Last year, the topic was the impact of various systems for scoring brackets in office pools.

This year I just want to provide up-to-date historical data for anyone who might want to play with it, including all 32 seasons of the tournament in its current 64-team format, from 1985 to 2016.

(Before continuing, note that the 4 “play-in” games of the so-called “first” round are an abomination, and so I do not consider them here, focusing on the 63 games among the 64-team field.)

First, the data: the following 16×16 matrix indicates the number of regional games (i.e., prior to the Final Four) in which seed i beat seed j.  Note that the round in which each game was played is implied by the seed match-up (e.g., seeds 1 and 16 play in the first round, etc.).

   0  21  13  34  32   7   4  52  59   4   3  19   4   0   0 128
  23   0  25   2   0  23  54   2   0  27  12   1   0   0 120   0
   8  14   0   2   2  38   7   1   1   9  27   0   0 107   1   0
  15   4   3   0  36   2   2   3   2   2   0  23 102   0   0   0
   7   3   1  31   0   1   0   0   1   1   0  82  12   0   0   0
   2   6  28   1   0   0   4   0   0   4  82   0   0  14   0   0
   0  21   5   2   0   3   0   0   0  78   0   0   0   1   2   0
  12   3   0   5   2   1   1   0  64   0   0   0   1   0   0   0
   5   1   0   0   1   0   0  64   0   0   0   0   1   0   0   0
   1  18   4   0   0   2  50   0   0   0   1   0   0   1   5   0
   3   1  14   0   0  46   3   0   0   2   0   0   0   5   0   0
   0   0   0  12  46   0   0   1   0   0   0   0   8   0   0   0
   0   0   0  26   3   0   0   0   0   0   0   3   0   0   0   0
   0   0  21   0   0   2   0   0   0   0   0   0   0   0   0   0
   0   8   0   0   0   0   1   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0

The following matrix, in the same format, is for the Final Four games:

  12   6   2   5   1   0   1   1   1   1   0   0   0   0   0   0
   4   3   3   1   0   1   0   0   0   0   1   0   0   0   0   0
   4   2   0   2   0   0   0   0   0   0   1   0   0   0   0   0
   1   0   0   1   1   0   0   0   0   0   0   0   0   0   0   0
   0   1   0   0   1   0   0   1   0   0   0   0   0   0   0   0
   0   1   0   1   0   0   0   0   0   0   0   0   0   0   0   0
   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   2   0   0   0   0   0   0   0   0   1   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0

Finally, the following matrix is for the championship games:

   6   6   1   2   3   1   0   0   0   0   0   0   0   0   0   0
   2   0   3   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   2   1   0   0   0   0   1   0   0   0   0   0   0   0   0
   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0
   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0

We can update some of the past analysis using this new data as well.  For example, what is the probability of picking a “perfect” bracket, predicting all 63 games correctly?  As before, Schwertman (see reference below) suggests a couple of simple-but-reasonable models of the probability of seed i beating seed j given by

p_{i,j} = 1 - p_{j,i} = \frac{1}{2} + k(s_i - s_j)

where s_i is a measure of the “strength” of seed i, and k is a scaling factor controlling the range of resulting probabilities, in this case chosen so that p_{1,16}=129/130, the expected value of the corresponding beta distribution (i.e., the posterior mean given that a 1 seed has won all 128 of its first-round games against a 16 seed).

One simple strength function is s_i=-i, which yields an overall probability of a perfect chalk bracket of about 1 in 188 billion.  A slightly better historical fit is

s_i = \Phi^{-1}(1 - \frac{4i}{n}) 

where \Phi^{-1} is the quantile function of the normal distribution, and n=351 is the number of teams in Division I.  In this case, the estimated probability of a perfect bracket is about 1 in 91 billion.  In either case, a perfect bracket is far more likely– about 100 million times more likely– than the usually-quoted 1 in 9.2 quintillion figure that assumes all 2^{63} outcomes are equally likely.
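
Here is a sketch of that calculation, assuming a “chalk” bracket in which the better seed wins every game, with three even games among 1 seeds in the Final Four; it should roughly reproduce the figures quoted above:

from scipy.stats import norm

def perfect_chalk_probability(strength):
    """Probability that the all-favorites bracket is exactly correct, under
    the model p[i][j] = 1/2 + k*(s[i] - s[j]) with k set by p[1][16] = 129/130."""
    s = {i: strength(i) for i in range(1, 17)}
    k = (129 / 130 - 1 / 2) / (s[1] - s[16])
    p = lambda i, j: 1 / 2 + k * (s[i] - s[j])

    # Chalk match-ups within a single region, round by round.
    rounds = [[(1, 16), (2, 15), (3, 14), (4, 13), (5, 12), (6, 11), (7, 10), (8, 9)],
              [(1, 8), (2, 7), (3, 6), (4, 5)],
              [(1, 4), (2, 3)],
              [(1, 2)]]
    region = 1.0
    for rnd in rounds:
        for i, j in rnd:
            region *= p(i, j)

    # Four regions, then three even games among 1 seeds in the Final Four.
    return region ** 4 / 8

print(1 / perfect_chalk_probability(lambda i: -i))                          # about 188 billion
print(1 / perfect_chalk_probability(lambda i: norm.ppf(1 - 4 * i / 351)))   # about 91 billion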

References:

    1. Schwertman, N., McCready, T., and Howard, L., Probability Models for the NCAA Regional Basketball Tournaments, The American Statistician, 45(1) February 1991, p. 35-38 [PDF]

Analysis of (hexagonal) Battleship

Introduction

I played a lot of Battleship when I was a kid.  It’s a simple game, but one with potentially very complex optimal playing strategy.  Recently I encountered a variant of the game that Milton Bradley released in 2008 (now owned by Hasbro), using a hexagonal grid instead of the usual square 10×10 grid.  The motivation for this post is to compare the original and updated versions, demonstrating just how unfair the new version of the game might be.

Original Battleship

In both versions of the game, two players each place their own ships on a grid of cells hidden from view of the other player.  Players then alternate turns guessing cell locations, trying to hit and “sink” the other player’s ships.  The following figure shows an example of one player’s deployment in the original game; there are 5 ships of various lengths: a carrier (5 cells), battleship (4 cells), submarine (3 cells), cruiser (3 cells), and destroyer (2 cells).

Original Battleship grid for a single player with example deployment of ships.

A natural question to ask is, in how many possible ways can a player deploy his ships?  This is a known problem that has been solved many times before… but usually with special-purpose code implementing a backtracking search enumerating individual deployments.

Instead, we can re-use the implementation of Knuth’s “Dancing Links” (DLX) algorithm by casting the problem as an instance of a generalized exact cover (more on the generalization shortly).  Recall that an exact cover problem is specified by a matrix of 0s and 1s, with a solution consisting of a subset of rows of the matrix that collectively contain exactly one 1 in each column.

Counting Battleship deployments is very similar to counting Kanoodle puzzle solutions: there are two “kinds” of columns in our matrix, one column for each of the 5 ships, and one column for each of the 100 cells in the 10×10 grid.  Each row of the matrix represents a possible placement of one of the ships, with a single 1 in the corresponding “ship” column, and additional 1s in the corresponding occupied “cell” columns.  The resulting matrix has 105 columns and 760 rows, corresponding to placing each of the carrier, battleship, submarine, cruiser, and destroyer in 120, 140, 160, 160, and 180 ways, respectively.
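
(Those placement counts are easy to verify directly: a straight ship of length L fits in 10-L+1 positions along a row or column, in 10 rows or columns, in 2 orientations.)

for name, length in [('Carrier', 5), ('Battleship', 4), ('Submarine', 3),
                     ('Cruiser', 3), ('Destroyer', 2)]:
    print(name, 2 * 10 * (10 - length + 1))   # 120, 140, 160, 160, 180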

However, unlike the Kanoodle puzzle, we don’t want an exact cover here, since that would effectively require that our 5 ships cover all 100 cells of the grid!  Instead, we want a “generalized” cover, in which some of the columns of the matrix are “optional” (Knuth calls them “secondary”), and may be covered at most once instead of exactly once.  In this case, the 5 ship columns are required/primary (we must use all of the ships), and the 100 cell columns are optional/secondary (we don’t have to cover every cell, but the ships can’t overlap, either).

Putting this all together, the following Python code specifies the details of the original Battleship game:

  • The board, specified as the coordinates of each of the cells in the 10×10 grid.
  • The size and shape of the 5 ship pieces, each in a default position and orientation.
  • The 4 possible rotations (in 90-degree increments) of each piece.
import numpy as np

def matrix_powers(R, n):
    """Return [R^1, R^2, ..., R^n]."""
    result = []
    a = np.array(R)
    for k in range(n):
        result.append(a)
        a = a.dot(R)
    return result

class Battleship:
    def __init__(self):
        self.board = {(x, y) for x in range(10) for y in range(10)}
        self.pieces = {'Carrier': {(x, 0) for x in range(5)},
                       'Battleship': {(x, 0) for x in range(4)},
                       'Submarine': {(x, 0) for x in range(3)},
                       'Cruiser': {(x, 0) for x in range(3)},
                       'Destroyer': {(x, 0) for x in range(2)}}
        self.rotations = matrix_powers(((0, -1), (1, 0)), 4)

Given such a specification of the details of the game, the following method constructs a sparse representation of the corresponding generalized exact cover matrix, by considering every possible ship piece, in every possible orientation, in every possible position on the board:

    def cover(self):
        """Return (pairs, optional_columns) for exact cover problem."""

        # Enumerate all possible placements/rotations of pieces.
        rows = set()
        for name, piece in self.pieces.items():
            for R in self.rotations:
                for offset in self.board:
                    occupied = {tuple(R.dot(x) + offset) for x in piece}
                    if occupied <= self.board:
                        rows.add((name, tuple(sorted(occupied))))

        # Convert placements to (row,col) pairs and optional column indices.
        cols = dict(enumerate(self.pieces))
        cols.update(enumerate(self.board, len(self.pieces)))
        cols = {v: k for k, v in cols.items()}
        pairs = []
        for i, row in enumerate(rows):
            name, occupied = row
            pairs.append((i, cols[name]))
            for x in occupied:
                pairs.append((i, cols[x]))
        return (pairs, list(range(len(self.pieces), len(cols))))

Plugging this into the C++ implementation of DLX (all of this code is available in the usual location here), we find, about 15 minutes later, that there are over 30 billion ways– 30,093,975,536, to be exact– for either player to place his 5 ships on the board in the original Battleship game.

Hexagonal Battleship

In the 2008 version of the game, there are several changes, as shown in the figure below.  Most immediately obvious is that the grid is no longer square, but hexagonal.  Also, some grid cells are “islands” (shown in brown) on which ships cannot be placed.  There are still 5 ships (shown in gray), but they have changed shape somewhat, no longer confined to straight lines of cells.

Finally, and most importantly, the two players each deploy their ships on two different “halves” of the board, with the “Blue” player’s ships on the top half, and the “Green” player’s ships on the bottom half.  (The figure shows a typical view of the board from Blue’s perspective.)

Hexagonal Battleship grid for both players (Blue and Green), with islands (brown) and example deployment of Blue’s ships (gray).

Closer inspection of the board shows that the Blue and Green halves are almost symmetric… but not quite.  Each half has 79 grid cells on which to deploy ships (I’m guessing this explains the missing cell at bottom center), and 4 of the 5 islands in each half are exactly opposite their counterparts in the other half… but not everything lines up exactly.  This strongly suggests that one half allows more possible ship deployments than the other (can you guess which just by looking?), which in turn suggests that that player has at least a slight advantage in the game.

Hexagonal coordinates

We can count possible ship deployments in the same way, using the same code, as in the original game described above.  The only catch is the hexagonal arrangement of grid cells.  To handle this, we just need a coordinate system for specifying cell locations that may at first seem unnecessarily complicated: let’s view our two-dimensional grid as being embedded in three dimensions.

Specifically, consider the points with integer coordinates in the plane x+y+z=0.  Imagine viewing this plane in two dimensions by looking down along the plane’s normal vector (1,1,1)^T toward the origin.  The points in this plane with integer coordinates form a triangular lattice; they are the centers of each hexagonal grid cell.  In the figure above, each hexagonal grid cell is shown with the coordinates of its center point, with the origin at the center of the board.

This is a handy representation, since these points form a vector space (more precisely, a module), where translations (i.e., moving ships from their “default” location to somewhere on the board) and rotations (i.e., orientations of ships) correspond to vector addition and matrix multiplication, respectively.

The following Python code uses these hexagonal coordinates to define the board, ship pieces, and 6 possible rotations for either the Blue (is_upper=True) or Green (is_upper=False) player in the 2008 version of Battleship:

class HexBattleship(Battleship):
    def __init__(self, is_upper=True):
        if is_upper:
            islands = {(-5, 4, 1), (-1, 3, -2), (-1, 7, -6), (4, 2, -6),
                       (5, -1, -4)}
        else:
            islands = {(-5, 1, 4), (-1, -6, 7), (-1, -2, 3), (2, -1, -1),
                       (4, -6, 2)}
        self.board = {(x, y, -x - y)
                      for x in range(-7, 8)
                      for y in range(max(-7 - x, -7) + 1 - abs(np.sign(x)),
                                     min(7 - x, 7) + 1)
                      if (6 * y >= -3 * x + x % 4) == is_upper and
                      not (x, y, -x - y) in islands}
        self.pieces = {'Carrier': {(x, 0, -x) for x in range(3)} |
                                  {(x, -1, -x + 1) for x in range(1, 3)},
                       'Battleship': {(x, 0, -x) for x in range(4)},
                       'Submarine': {(x, 0, -x) for x in range(3)},
                       'Destroyer': {(x, 0, -x) for x in range(2)},
                       'Weapons': {(0, 0, 0), (1, 0, -1), (0, 1, -1)}}
        self.rotations = matrix_powers(((0, -1, 0), (0, 0, -1), (-1, 0, 0)), 6)

Running DLX again on each of the resulting matrices, the upper Blue board allows 17,290,404,311 possible deployments, while the lower Green board allows 21,625,126,041– over 25% more than Blue!  So it seems like it would be a definite advantage to play Green.

Finally, one minor mathematical aside: note that the 60-degree rotation in the above code is specified by the matrix

R = \left(\begin{array}{ccc}0&-1&0\\ 0&0&-1\\ -1&0&0\end{array}\right)

This is convenient, since just like the 90-degree rotations in two dimensions in the original game, the effect is a simple cyclic permutation (and negation) of coordinates, which we could implement more directly without the matrix multiplication if desired.
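
For example, a direct version of that operation is just:

def rotate60(v):
    """Equivalent to multiplying by R above: a cyclic permutation with negation."""
    x, y, z = v
    return (-y, -z, -x)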

However, this is cheating somewhat, since R is not actually a proper rotation!  Its determinant is -1; the “real” matrix rotating vectors by 60 degrees about the axis (1,1,1)^T is

R' = \frac{1}{3}\left(\begin{array}{ccc}2&-1&2\\ 2&2&-1\\ -1&2&2\end{array}\right)

It’s an exercise for the reader to see why the simpler form of R still works.

References:

  1. Knuth, D., Dancing Links, Millennial Perspectives in Computer Science, 2000, p. 187-214 [arXiv]

Home-field advantage

I encountered the following problem a few weeks ago: suppose that Alice and Bob want to play a match consisting of a series of games, where the first player to win n games wins the match.  This is often referred to as a “best-of-(2n-1) series,” with many examples in a variety of sports, such as the World Series in baseball (n=4), sets and matches in tennis, volleyball, etc.

There is a problem, though: each game must be played at either Alice’s or Bob’s home field, conferring a slight advantage to the corresponding player.  Let’s assume that the probability that Alice wins a game at home is p, and the probability that Alice wins a game away (i.e., at Bob’s home field) is q<p.

(Note that this asymmetry may arise due to something other than where each game is played.  In tennis, for example, the serving player has a significant advantage; even against an otherwise evenly-matched opponent (i.e., p+q=1), values of p may be as large as 0.9 at the highest levels of play; see Reference (1) below.)

What is the probability that Alice wins the overall match?  Of course, this probability surely depends on how Alice and Bob agree on who has “home-field advantage” for each game.  Let’s assume without loss of generality that Alice plays at home in the first game, and consider a few different possibilities for how the rest of the series plays out:

  1. Alternating: Alice plays at home in odd-numbered games, and away in even-numbered games.  (This is similar to a set in tennis.)
  2. Winner plays at home: The winner of the previous game has home-field advantage in the subsequent game.  (This is similar to a set in volleyball.)
  3. Loser plays at home: The loser of the previous game has home-field advantage in the subsequent game.
  4. Coin toss: After Alice’s first game at home, a fair coin toss determines home-field advantage for each subsequent game.

I’m sure there are other reasonable approaches I’m not thinking of as well.  It is an interesting exercise to compute the probability of Alice winning the match using each of these approaches.  Certainly they yield very different distributions of outcomes of games: for example, the following figure shows the distribution of number of games played in a World Series, between evenly-matched opponents with a “typical” home-field advantage of p=0.55.

Distribution of number of games played in a best-of-7 series with p=0.55, q=0.45.

The motivation for this post is the observation that, despite these differences, approaches (1), (2), and (3) all yield exactly the same overall probability of Alice winning the match!  This was certainly not intuitive to me.  And the rule determining home-field advantage does matter in general; for example, the coin toss approach in (4) yields a different overall probability of winning, and so lacks something that (1), (2), and (3) have in common.  Can you see what it is?
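
To see this numerically, here is a sketch of the computation via recursion on the match state; the encoding of the four rules is my own reading of the descriptions above.  With p=0.55, q=0.45, and n=4, the first three rules should yield identical match-win probabilities for Alice, while the coin toss yields a different value:

from functools import lru_cache

def match_win_probability(p, q, n, rule):
    """P(Alice wins a first-to-n match), with Alice at home in game 1.
    rule(game, alice_home, alice_won) gives the probability that Alice is
    at home in the *next* game."""
    @lru_cache(maxsize=None)
    def win(a, b, game, home):                 # a, b = games won so far
        if a == n:
            return 1.0
        if b == n:
            return 0.0
        p_game = p if home else q              # Alice's chance in this game
        total = 0.0
        for won, prob in ((True, p_game), (False, 1.0 - p_game)):
            h = rule(game, home, won)
            total += prob * (h * win(a + won, b + (not won), game + 1, True) +
                             (1 - h) * win(a + won, b + (not won), game + 1, False))
        return total
    return win(0, 0, 1, True)

rules = {'alternating':    lambda g, home, won: 1.0 if g % 2 == 0 else 0.0,
         'winner at home': lambda g, home, won: 1.0 if won else 0.0,
         'loser at home':  lambda g, home, won: 0.0 if won else 1.0,
         'coin toss':      lambda g, home, won: 0.5}
for name, rule in rules.items():
    print(name, round(match_win_probability(0.55, 0.45, 4, rule), 6))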

References:

  1. Newton, P. and Keller, J., Probability of Winning at Tennis I. Theory and Data, Studies in Applied Mathematics, 114(3) April 2005, p. 241-269 [PDF]
  2. Kingston, J. G., Comparison of Scoring Systems in Two-Sided Competitions, Journal of Combinatorial Theory (Series A), 20(3) May 1976, p. 357-362
  3. Anderson, C. L., Note on the Advantage of First Serve, Journal of Combinatorial Theory (Series A), 23(3) November 1977, p. 363