Distribution and variance in blackjack

Introduction

Analysis of casino blackjack has been a recurring passion project of mine for nearly two decades now.  I have been back at it again for the last couple of months, this time working on computing the distribution (and thus also the variance) of the outcome of a round.  This post summarizes the initial results of that effort.  For reference up front, all updated source code is available here, with pre-compiled binaries for Windows in the usual location here.

Up to this point, the focus has been on accurate and efficient calculation of exact expected value (EV) of the outcome of a round, for arbitrary shoe compositions and playing strategies, most recently including index play using a specified card counting system.  This is sufficient for evaluating playing efficiency: that is, how close to “perfect play” (in the EV-maximizing sense) can be achieved solely by varying playing strategy?

But advantage players do not just vary playing strategy, they also– and arguably more importantly– have a betting strategy, wagering more when they perceive the shoe to be favorable, wagering less or not at all when the shoe is unfavorable.  So a natural next step is to evaluate such betting strategies… but to do so requires not just expected value, but also the variance of the outcome of each round.  This post describes the software updates for computing not just the variance, but the entire probability distribution of possible outcomes of a round of blackjack.

Rules of the game

For consistency in all discussion, examples, and figures, I will assume the following setup: 6 decks dealt to 75% penetration, dealer stands on soft 17, doubling down is allowed on any two cards including after splitting pairs, no surrender… and pairs may be split only once (i.e., to a maximum of two hands).  Note that this is almost exactly the same setup as the earlier analysis of card counting playing efficiency, with the exception of not re-splitting pairs: although we can efficiently compute exact expected values even when re-splitting pairs is allowed, computing the distribution in that case appears to be much harder.

Specifying (vs. optimizing) strategy

The distribution of outcomes of a round depends not only on the rule variations described above, but also on the playing strategy.  This is specified using the same interface as the original expected-value calculation, which looks like this:

virtual int BJStrategy::getOption(const BJHand & hand, int upCard,
    bool doubleDown, bool split, bool surrender);


The method parameters specify the “zero memory” information available to the player: the cards in the current hand, the dealer’s up card, and whether doubling down, splitting, or surrender is allowed in the current situation.  The return value indicates the action the player should take: whether to stand, hit, double down, split, or surrender.  For example, the following implementation realizes the simple– but poorly performing– “mimic the dealer” strategy:

virtual int getOption(const BJHand & hand, int upCard,
    bool doubleDown, bool split, bool surrender) {
    if (hand.getCount() < 17) {
        return BJ_HIT;
    } else {
        return BJ_STAND;
    }
}


When computing EV, instead of returning an explicit action, we can also return the code BJ_MAX_VALUE, meaning, “Take whatever action maximizes EV in this situation.”  For example, the default base class implementation of BJStrategy::getOption() always returns BJ_MAX_VALUE, meaning, “Compute optimal composition-dependent zero memory (CDZ-) strategy, maximizing overall EV for the current shoe.”

However, handling this special return value requires evaluating all possible subsequent hands that may result (i.e., from hitting, doubling down, splitting, etc.).  When only computing EV, we can do this efficiently, but for computing the distribution, an explicit specification of playing strategy is required.

The figure below shows a comparison of the resulting distribution of outcomes for these two example strategies.

Probability distribution of player outcomes of a unit wager on a single round from a full shoe, using “mimic the dealer” strategy (red) vs. optimal strategy (blue).

It is perhaps not obvious from this figure just how much worse “mimic the dealer” performs, yielding a house edge of 5.68% of initial wager; compare this with the house edge of just 0.457% for optimal CDZ- strategy.

Algorithm details

Blackjack is hard (i.e., interesting) because of splitting pairs.  It is manageably hard in the case of expected value, because expected value is linear: that is, if $X_1$ and $X_2$ are random variables indicating the outcome of the two “halves” of a split pair, then the expected value of the overall outcome, $E[X_1+X_2]$, is simply the sum of the individual expected values, $E[X_1]+E[X_2]$.  Better yet, when both halves of the split are resolved using the same playing strategy, those individual expected values are equal, so that we need only compute $2E[X_1]$.  (When pairs may be split and re-split, things get slightly more complicated, but the idea still applies.)

Variance, on the other hand, is not linear– at least when the summands are correlated, which they certainly are in the case of blackjack split hands.  So my first thought was to simply try brute force, recursively visiting all possible resolutions of a round.  Weighting each resulting outcome by its probability of occurrence, we can compute not just the variance, but the entire distribution of the outcome of the round.
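
(To see the problem, write out the variance of a sum:

$\mathrm{Var}(X_1+X_2) = \mathrm{Var}(X_1) + \mathrm{Var}(X_2) + 2\,\mathrm{Cov}(X_1, X_2)$

The covariance term vanishes for independent summands, but the two halves of a split are resolved from the same depleted shoe against the same dealer hand, so it certainly does not vanish here.)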

This was unacceptably slow, taking about 5 minutes on my laptop to compute the distribution for CDZ- strategy from a full 6-deck shoe.  (Compare this with less than a tenth of a second to compute the CDZ- strategy itself and the corresponding overall EV.)  It was also not terribly accurate: the “direct” computation of overall EV provides a handy source of ground truth against which we can compare the “derived” overall EV using the distribution.  Because computing that distribution involved adding up over half a billion individual possible outcomes of a round, numerical error resulted in loss of nearly half of the digits of double precision.  A simple implementation of Kahan’s summation algorithm cleaned things up.
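
(For reference, here is a sketch of the compensated summation idea, in Python for readability; the actual implementation is in the C++ source.)

def kahan_sum(values):
    """Compensated summation: recover the low-order bits that plain
    floating-point accumulation would discard at each step."""
    total = 0.0
    c = 0.0                   # running compensation for lost low-order bits
    for x in values:
        y = x - c             # apply the correction from the previous step
        t = total + y         # low-order bits of y may be lost here...
        c = (t - total) - y   # ...but are recovered exactly in c
        total = t
    return total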

Two modifications to this initial approach resulted in a roughly 200-fold speedup, so that the current implementation now takes about 2.5 seconds to compute the distribution for the same full 6-deck optimal strategy.  First, the Steiner tree problem of efficiently calculating the probabilities of outcomes of the dealer’s hand can be split up into 10 individual trees, one for each possible up card.  This was an interesting trade-off: when you know you need all possible up cards for all possible player hands– as in the case of computing overall EV– there are savings to be gained by doing the computation for all 10 up cards at once.  But when computing the distribution, some player hands are only ever reached with some dealer up cards, so there is benefit in only doing the computation for those that you need… even if doing them separately actually makes some of that computation redundant.
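
To make the per-up-card idea concrete, the following is a rough Python sketch (not the much faster C++ implementation) of computing the dealer's final-total probabilities for a single up card and shoe, with the dealer standing on soft 17; for simplicity it ignores conditioning on the dealer not having blackjack:

def dealer_totals(shoe, total, soft=False, p=1.0, out=None):
    """Probabilities of dealer final totals (17-21, or 22 for bust),
    given shoe counts for (A, 2, ..., 9, ten), with the up card already
    removed from the shoe and counted in total."""
    if out is None:
        out = {}
    if total >= 17:
        key = 22 if total > 21 else total
        out[key] = out.get(key, 0.0) + p
        return out
    cards = sum(shoe)
    for c in range(1, 11):
        if shoe[c - 1] == 0:
            continue
        q = p * shoe[c - 1] / cards
        shoe[c - 1] -= 1                 # deal the card...
        t, s = total + c, soft
        if c == 1 and t + 10 <= 21:
            t, s = t + 10, True          # count a new ace as 11 if it fits
        elif s and t > 21:
            t, s = t - 10, False         # demote a soft ace from 11 to 1
        dealer_totals(shoe, t, s, q, out)
        shoe[c - 1] += 1                 # ...and put it back
    return out

shoe = [24] * 9 + [96]                   # full 6-deck shoe
shoe[5] -= 1                             # dealer up card: a six
print(dealer_totals(shoe, total=6))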

Second, and most importantly, we don’t have to recursively evaluate every possible resolution of pair split hands.  The key observation is that (1) both halves of the split have the same set of possible hands– viewed as unordered subsets of cards– that may result; and (2) each possible selection of pairs of such resolved split hands may occur many times, but each with the same probability.  So we need only recursively traverse one half of any particular split hand, recording the number of times we visit each resolved hand subset.  Then we traverse the outer product of pairs of such hands, computing the overall outcome and “individual” probability of occurrence, multiplied by the number of orderings of cards that yield that pair of hands.
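
In (simplified) code form, the bookkeeping looks something like the following sketch, where resolved is the list of (hand, orderings) pairs recorded for one half of the split, and joint_prob and payoff are hypothetical stand-ins for the real computations of the probability of one particular ordered pair of hands and the value of a resolved hand:

from collections import defaultdict
from itertools import product

def split_distribution(resolved, joint_prob, payoff):
    """Distribution of the total outcome of the two halves of a split,
    traversing only unordered pairs of resolved hand subsets and
    weighting by the number of card orderings realizing each pair."""
    dist = defaultdict(float)
    for (h1, n1), (h2, n2) in product(resolved, repeat=2):
        dist[payoff(h1) + payoff(h2)] += n1 * n2 * joint_prob(h1, h2)
    return dist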

3:2 Blackjack isn’t always optimal

Finally, I found an interesting, uh, “feature,” in my original implementation of CDZ- strategy calculation.  In early testing of the distribution calculation, I noticed that the original direct computation of overall EV didn’t always agree with the EV derived from the computed distribution.  The problem turned out to be the “special” nature of a player blackjack, that is, drawing an initial hand of ten-ace, which pays 3:2 (when the dealer does not also have blackjack).

There are extreme cases where “standing” on blackjack is not actually optimal: for example, consider a depleted shoe with just a single ace and a bunch of tens.  If you are dealt blackjack, then you can do better than an EV of 1.5 from the blackjack payoff, by instead doubling down, guaranteeing a payoff of 2.0.  Although interesting, so far no big deal; the previous and current implementations both handle this situation correctly.

But now suppose that you are initially dealt a pair of tens, optimal strategy is to split the pair, and you subsequently draw an ace to one of those split hands.  This hand does not pay 3:2 like a “normal” blackjack… but the “zero memory” action we choose may depend on when we correct the expected value of standing on ten-ace to reflect the pre-split 3:2 payoff: in the previous implementation, this correction was applied after the post-split EVs were computed, so the effective strategy might be to double down on ten-ace after a split, but stand for the special 3:2 payoff on an “actual” blackjack.

Since this violates the zero memory constraint of CDZ-, the current updated implementation performs this correction before post-split EVs are computed, so that the ten-ace playing decision will be the same in both cases.  (Note that the strategy calculator also supports the more complex CDP1 and CDP strategies, the distinctions of which are for another discussion.)

What I found interesting was just how common this seemingly pathological situation is.  In a simulation of 100,000 shoes played to 75% penetration, roughly 14.5% of them involved depleted shoe compositions where optimal strategy after splitting tens and drawing an ace was to double down instead of stand.  As expected, these situations invariably occurred very close to the cut card, with depleted shoes still overly rich in tens.  (For one specific example, consider the shoe represented by the tuple (6, 6, 1, 5, 8, 2, 5, 9, 2, 34), indicating 6 aces, 6 twos, 34 tens, etc.)

Wrapping up, this post really just captures my notes on the assumptions and implementation details of the computation of the probability distribution of outcomes of a round of blackjack.  The next step is to look at some actual data, where one issue I want to focus on is a comparison of analysis approaches– combinatorial analysis (CA) and Monte Carlo simulation– their relative merits, and in particular, the benefits of combining the two approaches where appropriate.

Serializing MATLAB data

Consider the following problem: given a value in the MATLAB programming language, can we serialize it into a sequence of bytes– suitable for, say, storage on disk– in a form that allows easy recovery of the exact original value?

Although I will eventually try to provide an actual solution, the primary motivation for this post is simply to point out some quirks and warts of the MATLAB language that make this problem surprisingly difficult to solve.

“Binary” serialization

Our problem requires a bit of clarification, since there are at least a couple of different reasonable use cases.  First, if we can work with a stream of arbitrary opaque bytes– for example, if we want to send and receive MATLAB data on a TCP socket connection– then there is actually a very simple and robust built-in solution… as long as we’re comfortable with undocumented functionality.  The function b=getByteStreamFromArray(v) converts a value to a uint8 array of bytes, and v=getArrayFromByteStream(b) converts back.  This works on pretty much all types of data I can think of to test, even Java- and user-defined class instances.

Text serialization

But what if we would like something human-readable (and thus potentially human-editable)?  That is, we would like a function similar to Python’s repr that converts a value to a char string representation, so that eval(repr(v)) “equals” v.  (I say “equals” because even testing such a function is hard to do in MATLAB.  I suppose the built-in function isequaln is the closest approximation to what we’re looking for, but it ignores type information, so that isequaln(int8(5), single(5)) is true, for example.)

Without further ado, following is my attempt at such an implementation, to use as you wish:

function s = repr(v)
%REPR Return string representation of value such that eval(repr(v)) == v.
%
%   Class instances, NaN payloads, and function handle closures are not
%   supported.

if isstruct(v)
    s = sprintf('cell2struct(%s, %s)', ...
        repr(struct2cell(v)), repr(fieldnames(v)));
elseif isempty(v)
    sz = size(v);
    if isequal(sz, [0, 0])
        if isa(v, 'double')
            s = '[]';
        elseif ischar(v)
            s = '''''';
        elseif iscell(v)
            s = '{}';
        else
            s = sprintf('%s([])', class(v));
        end
    elseif isa(v, 'double')
        s = sprintf('zeros(%s)', mat2str(sz, 17));
    elseif iscell(v)
        s = sprintf('cell(%s)', mat2str(sz, 17));
    else
        s = sprintf('%s(zeros(%s))', class(v), mat2str(sz, 17));
    end
elseif ~ismatrix(v)
    nd = ndims(v);
    s = sprintf('cat(%d, %s)', nd, strjoin(cellfun(@repr, ...
        squeeze(num2cell(v, 1:(nd - 1))).', ...
        'UniformOutput', false), ', '));
elseif isnumeric(v)
    if ~isreal(v)
        s = sprintf('complex(%s, %s)', repr(real(v)), repr(imag(v)));
    elseif isa(v, 'double')
        s = strrep(repr_matrix(@arrayfun, ...
            @(x) regexprep(char(java.lang.Double.toString(x)), ...
            '\.0$', ''), v, '[%s]', '%s'), 'inity', '');
    elseif isfloat(v)
        s = strrep(repr_matrix(@arrayfun, ...
            @(x) regexprep(char(java.lang.Float.toString(x)), ...
            '\.0$', ''), v, '[%s]', 'single(%s)'), 'inity', '');
    elseif isa(v, 'uint64') || isa(v, 'int64')
        t = class(v);
        s = repr_matrix(@arrayfun, ...
            @(x) sprintf('%s(%s)', t, int2str(x)), v, '[%s]', '%s');
    else
        s = mat2str(v, 'class');
    end
elseif islogical(v) || ischar(v)
    s = mat2str(v);
elseif iscell(v)
    s = repr_matrix(@cellfun, @repr, v, '%s', '{%s}');
elseif isa(v, 'function_handle')
    s = sprintf('str2func(''%s'')', func2str(v));
else
    error('Unsupported type.');
end
end

function s = repr_matrix(map, repr_scalar, v, format_matrix, format_class)
s = strjoin(cellfun(@(row) strjoin(row, ', '), ...
    num2cell(map(repr_scalar, v, 'UniformOutput', false), 2).', ...
    'UniformOutput', false), '; ');
if ~isscalar(v)
    s = sprintf(format_matrix, s);
end
s = sprintf(format_class, s);
end


That felt like a lot of work… and that’s only supporting the “plain old data” types: struct and cell arrays, function handles, logical and character arrays, and the various floating-point and integer numeric types.  As the help indicates, Java and classdef instances are not supported.  A couple of other cases are only imperfectly handled as well, as we’ll see shortly.

Struct arrays

The code starts with struct arrays.  The tricky issue here is that struct arrays can not only be “empty” in the usual sense of having zero elements, but– independently of whether they are empty– they can also have no fields.  It turns out that the struct constructor, which would work fine for “normal” structures with one or more fields, has limited expressive power when it comes to field-less struct arrays: unless the size is 1x1 or 0x0, some additional concatenation or reshaping is required.  Fortunately, cell2struct handles all of these cases directly.

Multi-dimensional arrays

Next, after handling the tedious cases of empty arrays of various types, the ~ismatrix(v) test handles multi-dimensional arrays– that is, arrays with more than 2 dimensions.  I could have handled this with reshape instead, but I think this recursive concatenation approach does a better job of preserving the “visual shape” of the data.

In the process of testing this, I learned something interesting about multi-dimensional arrays: they can’t have trailing singleton dimensions!  That is, there are 1×1 arrays, and 2×1 arrays, even 1x2x3 and 2x1x3 arrays… but no matter how hard I try, I cannot construct an mxnx1 array, or an mxnxkx1 array, etc.  MATLAB seems to always “squeeze” trailing singleton dimensions automagically.

Numbers

The isnumeric(v) section is what makes this problem almost comically complicated.  There are 10 different numeric types in MATLAB: double and single precision floating point, and signed and unsigned 8-, 16-, 32-, and 64-bit integers.  Serializing arrays of these types should be the job of the built-in function mat2str, which we do lean on here, but only for the shorter integer types, since it fails in several ways for the other numeric types.

First, the nit-picky stuff: I should emphasize that my goal is “round-trip” reproducibility; that is, after converting to string and back, we want the underlying bytes representing the numeric values to be unchanged.  Precision is one issue: for some reason, MATLAB’s default seems to be 15 decimal digits, which isn’t enough– by two– to accurately reproduce all double precision values.  Granted, precision is an optional argument to mat2str, which effectively uses sprintf('%.17g', x) under the hood, but Java’s algorithm does a better job of limiting the number of digits that are actually needed for any given value.
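
The 15-vs-17 digit issue is easy to demonstrate in any language; in Python, for example:

x = 0.1 + 0.2
print('%.15g' % x)               # 0.3
print(float('%.15g' % x) == x)   # False: 15 digits do not round-trip
print('%.17g' % x)               # 0.30000000000000004
print(float('%.17g' % x) == x)   # True: 17 digits always round-trip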

Other reasons to bypass mat2str are that (1) for some reason it explicitly “erases” negative zero, and (2) it still doesn’t quite accurately handle complex numbers involving NaN, although it has improved in recent releases.  Witness eval(mat2str(complex(0, nan))), for example.  (My implementation isn’t perfect here, either, though; there are multiple representations of NaN, but this function strips any payload.)

But MATLAB’s behavior with 64-bit integer types is the most interesting of all, I think.  Imagine things from the parser’s perspective: any numeric literal defaults to double precision, which, without a decimal point or fractional part, we can think of as “almost” an int54.  There is no separate syntax for integer literals; construction of “literal” values of the shorter (8-, 16-, and 32-bit) integer types effectively casts from that double-precision literal to the corresponding integer type.

But for uint64 and int64, this doesn’t work… and for a while (until around R2010a), it really didn’t work– there was no way to directly construct a 64-bit integer larger than 2^53, if it wasn’t a power of two!

This behavior has been improved somewhat since then, but at the expense of added complexity in the parser: the expression [u]int64(expr) is now a special case, as long as expr is an integer literal, with no arithmetic, imaginary part, etc.  Even so much as a unary plus will cause a fall back to the usual cast-from-double.  (It appears that Octave, at least as of version 4.0.3, has not yet worked this out.)

The effect on this serialization function is that we have to wrap that explicit uint64 or int64 construction around each individual integer scalar, instead of a single cast of the entire array expression as we can do with all of the other numeric types.

Function handles

Finally, function handles are also special.  First, they must be scalar (i.e., 1×1), most likely due to the language syntax ambiguity between array indexing and function application.  But function handles also can have workspace variables associated with them– usually when created anonymously– and although an existing function handle and its associated workspace can be inspected, there does not appear to be a way to create one from scratch in a single evaluatable expression.


IBM Research Ponder This: August 2016 Puzzle

This month’s IBM Research puzzle is pretty interesting: it sets the stage by describing a seemingly much simpler problem, then asking for a solution to a more complicated variant.  But even that simpler problem is worth a closer look.

I won’t spoil the “real” problem here– actually, I won’t approach that problem at all.  Instead, let’s start at the very beginning:

Problem 1: You have 10 bags, each with 1000 gold coins.  Each coin should weigh exactly 10 grams, but one of the bags contains all counterfeit coins, weighing only 9 grams each.  You have a scale on which you can accurately measure the weight of any number of coins.  With just one weighing, how can you determine which bag contains the counterfeit coins?

This is a standard, job-interview-ish, “warm-up” version of the problem, with the following solution: arbitrarily number the bags 1 through 10, and take $k$ coins from bag $k$, for a total of 55 coins.  If all of the coins were genuine, the total weight would be 550 grams; the measured deficit from this weight indicates the number of the counterfeit bag.

So far, so good.  The IBM Research version of the problem extends this in two ways (and eventually a third): first, what happens if more than one bag– or possibly none of the bags– might be counterfeit?  And second, what if there is a limited number $n$ of coins available in each bag (where, for example, $n=1000$ above)?  From the IBM Research page:

If $n \geq 1024$ then one can identify the [subset of] counterfeit bags using a single measurement with an accurate weighing scale.  (How?)

That is, instead of knowing that exactly one bag is counterfeit, any of the $2^{10}=1024$ possible subsets of bags may be counterfeit, and our objective is to determine which subset, with just a single weighing of coins carefully chosen from each bag.
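
The telegraphed solution is to weigh a distinct power of two coins from each bag, so that the measured deficit in grams is the binary encoding of the counterfeit subset.  A quick Python sketch (note that the largest number of coins drawn from any one bag is $2^9 = 512$):

coins = [2 ** k for k in range(10)]         # 1, 2, 4, ..., 512 coins from bags 0-9
counterfeit = {2, 5, 9}                     # any subset of the 10 bags

weight = sum(c * (9 if k in counterfeit else 10)
             for k, c in enumerate(coins))  # counterfeit coins are 1 gram light
deficit = 10 * sum(coins) - weight
decoded = {k for k in range(10) if deficit >> k & 1}
assert decoded == counterfeit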

The motivation for this post is that, although the statement above seems to telegraph the intended elegant solution to this “warm-up” version of the problem, (a) that intended solution can actually be implemented by requiring only $n=512$ coins in each bag, not 1024; and (b) even this stronger bound is not tight– that is, even before approaching the “real” problem asked in this month’s IBM Research puzzle, the following simpler variant is equally interesting:

Problem 2: What is the minimum value of $n$ (i.e., the minimum number of coins in each bag), for which a single weighing is sufficient to determine the subset of bags containing counterfeit coins?

Hint: I ended up tackling this by writing code to solve smaller versions of the problem, then leaning on the OEIS to gain insight into the general solution.
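
In that same spirit, here is a minimal brute-force search for small numbers of bags; it checks whether some assignment of (necessarily distinct) per-bag coin counts yields a distinct total deficit for every possible counterfeit subset:

from itertools import combinations

def distinct_subset_sums(counts):
    """True if every subset of bags yields a distinct total deficit."""
    seen = set()
    for r in range(len(counts) + 1):
        for subset in combinations(counts, r):
            s = sum(subset)
            if s in seen:
                return False
            seen.add(s)
    return True

def min_coins(bags):
    """Smallest n such that some choice of coin counts, each at most n,
    distinguishes all 2^bags possible counterfeit subsets."""
    n = bags - 1
    while True:
        n += 1
        if any(distinct_subset_sums(c)
               for c in combinations(range(1, n + 1), bags)):
            return n

print([min_coins(m) for m in range(1, 6)])  # enough terms to search the OEIS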

[Edit: The puzzle statement on the IBM Research page has since been updated to reflect that the “nice”– but still not tight– bound is 512, not 1024.]


Twisty lattice paths, all alike

I haven’t posted a puzzle in quite a while, so…

Every morning, I drive from my home at 1st and A to my work at 9th and J (8 blocks east and 9 blocks north), always heading either east or north so as not to backtrack… but I choose a random route each time, with all routes being equally likely.  How many turns should I expect to make?

And a follow-up: on busy city streets, left turns are typically much more time-consuming than right turns.  On average, how many left turns should I expect to make?

The first problem is presented in Zucker’s paper referenced below, and solved using induction on a couple of recurrence relations.  But it seems to me there is a much simpler solution, one that more directly addresses the follow-up as well.
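
(For checking an answer numerically, here is a quick Monte Carlo sketch; it counts a turn as any change of direction, and treats an east-to-north change as a left turn.)

import random

def average_turns(east=8, north=9, trials=100000):
    """Estimate the expected number of turns (and left turns) on a
    uniformly random monotone route."""
    turns = lefts = 0
    for _ in range(trials):
        route = ['E'] * east + ['N'] * north
        random.shuffle(route)            # uniform over all C(17, 8) routes
        for a, b in zip(route, route[1:]):
            if a != b:
                turns += 1
                lefts += (a == 'E')      # heading east, north is to the left
    return turns / trials, lefts / trials

print(average_turns())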

Reference:

• Zucker, M., Lattice Paths and Harmonic Means, The College Mathematics Journal, 47(2), March 2016, pp. 121-124 [JSTOR]


Rainbows

Recently my wife and I saw a double rainbow, shown below.  I’m a lousy photographer, but it’s there if you look closely, along the top of the picture:

There are plenty of existing sources that describe the mathematics of rainbows: briefly, rays of sunlight (entering the figure below from the right) strike airborne droplets of water, bending due to refraction, reflecting off the inside of the droplet, and finally exiting “back” toward an observer.

The motivation for this post is to describe the calculations involved in predicting where rainbows can be seen, but focusing on generalization: with just one equation, so to speak, we can let the sunlight bounce around a while inside the water droplet, and predict all of the different “higher-order” rainbows (double, tertiary, etc.) at once.  I think this is a nice “real” problem for students of differential calculus; its solution involves little more than the chain rule and some trigonometry.

The figure above shows the basic setup, where angle $\alpha \in (0, \pi/2)$ parameterizes the location where a ray of sunlight, entering horizontally from the right, strikes the surface of the spherical droplet.  The angle $\beta$ is the corresponding normal angle of refraction, related by Snell’s law:

$n = \frac{n_{water}}{n_{air}} = \frac{\sin \alpha}{\sin \beta}$

where $n_{air}=1.00029$ and $n_{water}=1.333$ are the indices of refraction for air and water, respectively.  (Let’s defer for the moment the problem that these indices actually vary slightly with wavelength.)

As a function of $\alpha$, at what angle does the light ray exit the picture?  Starting with the angle $\alpha$ shown in the center of the droplet, and “reflecting around” the center counterclockwise– and including the final additional refraction upon exiting the droplet– the desired angle $\theta$ is

$\theta(\alpha) = \alpha + (k+1)(\pi - 2\beta) + \alpha$

where– and this is the generalization– the integer $k$ indicates how many times we want the ray to reflect inside the droplet ($k=1$ in the figure above).  The figure below shows how this exit angle $\theta$ varies with $\alpha$, for various numbers of internal reflections.

The key observation is that for $k>0$, there is a critical minimum exit angle, in the neighborhood of which there is a concentration of light rays.  We can find this critical angle by minimizing $\theta$ as a function of $\alpha$.  (Note that there is no critical angle when $k=0$, and so a rainbow requires at least one reflection inside the droplets.)

At this point, we have a fairly straightforward, if somewhat tedious, calculus problem: find $\theta$ where $\theta'(\alpha)=0$.  It’s an exercise for the reader to show that this occurs at:

$\alpha = \cos^{-1}\sqrt{\frac{n^2-1}{k(k+2)}}$

$\theta = 2\alpha + (k+1)(\pi - 2\beta)$
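
A quick numerical check of these formulas, in Python:

from math import acos, asin, degrees, pi, sin, sqrt

n = 1.333 / 1.00029                      # relative index of refraction

for k in (1, 2, 3):                      # number of internal reflections
    alpha = acos(sqrt((n ** 2 - 1) / (k * (k + 2))))
    beta = asin(sin(alpha) / n)
    theta = 2 * alpha + (k + 1) * (pi - 2 * beta)
    print(k, round(degrees(theta)))      # 318, 411, 498 degrees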

Evaluating at $k=1$ yields an angle $\theta$ of about 318 degrees.  This requires some interpretation: imagine a line between you and the Sun.  Then standing with the Sun behind you, a primary rainbow is visible along a cone making an angle of about 360-318=42 degrees with this line.

(Coming back now to the dependence of index of refraction on wavelength: although the variation is negligible for air, the index of refraction for water varies from about 1.33 for red light to about 1.34 for violet light, so that the rainbow is not just a band of brighter light, but a spread of colors from about 41.1 to 42.5 degrees… with the red light on the outside of the band.)

For $k=2$, the critical $\theta$ is about 411 degrees, corresponding to a viewing angle of about 51 degrees… with two differences: first, the additional internal reflection means that the light strikes the bottom half of each droplet, making a complete “loop” around the inside of the droplet before exiting toward the observer.  Second, this opposite direction of reflection means that the bands of colors are reversed, with red on the inside of the band.

Finally, the angle for $k=3$ is about 498 degrees: such a tertiary rainbow may be seen facing the Sun (good luck with that), in a sort of “corona” around it at an angle of about 42 degrees.

Category theory without categories

(Following is an attempt to capture some of my thoughts following an interesting student discussion.  I think I mostly failed during that discussion, but it was fun.)

How do we know when two sets $X$ and $Y$ have the same “size”?  The layman answer is, “When they both have the same number of elements,” or even better, “When we can ‘pair up’ elements from $X$ and $Y$, with each element a part of exactly one pair.”

The mathematician formalizes this idea as a bijection, that is, a function $f:X \rightarrow Y$ that is both:

• Injective, or “one-to-one”: if $f(x_1)=f(x_2)$, then $x_1 = x_2$.  Intuitively, distinct elements of $X$ map to distinct elements of $Y$.
• Surjective, or “onto”: for every element $y \in Y$, there exists some $x \in X$ such that $f(x)=y$.  Intuitively, we “cover” all of $Y$.

Bijections are handy tools in combinatorics, since they provide a way to count elements in $Y$, which might be difficult to do directly, by instead counting elements in $X$, which might be easier.

So far, so good.  Now suppose that a student asks: why are injectivity and surjectivity the critical properties of functions that “pair up” elements of two sets?

This question struck a chord with me, because I remember struggling with it myself while learning introductory algebra.  The ideas behind these two properties seemed vaguely “dual” to me, but the structure of the actual definitions looked very different.  (I remember thinking that, if anything, the properties that most closely “paired” with one another were injectivity and well-definedness (i.e., if $x_1 = x_2$, then $f(x_1)=f(x_2)$; compare this with the above definition of an injection).  Granted, well-definedness might seem obvious most of the time, but sometimes an object can have many names.)

One way to answer such a question is to provide alternative definitions for these properties, that are equivalent to the originals, but where that elegance of duality is more explicit:

• A function $f:X \rightarrow Y$ is injective if and only if, for all functions $g,h:W \rightarrow X$, $f \circ g = f \circ h$ implies $g=h$.
• A function $f:X \rightarrow Y$ is surjective if and only if, for all functions $g,h:Y \rightarrow Z$, $g \circ f = h \circ f$ implies $g=h$.

It’s a great exercise to show that these new definitions are indeed equivalent to the earlier ones.  But if that’s the case, and if these are more pleasing to look at, then why don’t we always use/teach them instead?

I think there are at least two practical explanations for this.  First, these definitions are inconvenient.  That is, to use them to show that $f:X \rightarrow Y$ is a bijection, we end up having to drag a third (and fourth) arbitrary hypothetical set $W$ (and $Z$) into the mix, with corresponding arbitrary hypothetical functions to compose with $f$.  With the usual definitions, we can focus all of our attention solely on the two sets under consideration.

Second, these new definitions can be dangerous.  Often the function $f$ we are trying to show is a bijection preserves some useful structural relationship between the sets $X$ and $Y$ (e.g., $f$ may be a homomorphism)… but to use the new definitions, we must consider all possible additional functions $g,h$, not just that restricted class of functions that behaves similarly to $f$.  We have strayed into category theory, where monomorphisms and epimorphisms generalize the notions of injections and surjections, respectively.  I say “generalize,” because although for sets, these definitions are equivalent, in other categories (rings, for example), things get more complicated, where a monomorphism may not be injective, or (more frequently) an epimorphism may not be surjective.  (For example, the inclusion of the integers into the rationals is an epimorphism in the category of rings, but it is certainly not surjective.)


Sump pump monitoring with a Raspberry Pi

I am not much of a mathematician, and even less of a programmer.  But I am at my most dangerous when wielding a screwdriver, attempting to be “handy.”  So this project was a lot of fun.

My house is situated on a hill, and the sump pump in my basement acts up seasonally, rain or shine, making me suspect a possible underground spring or stream.  To test this theory, my goal was to monitor the behavior of the sump pump, measure the rate at which water is ejected from the pit, and as a safety feature, to automatically send myself text messages in the event of any of several possible failure conditions (e.g., pump or float switch failure, excessive depth of water in the pit, etc.).

The setup is shown in the diagram below.  I was able to use mostly existing equipment from past educational projects, with the exception of some wiring, a length of PVC pipe, and an eTape liquid level sensor that I wanted to try out.  The sensor is a length of tape submerged in the sump pit (inside a PVC pipe to keep it vertical); it acts as a variable resistor, with the resistance decreasing linearly as the water rises in the pit.  Using a voltage divider circuit, the resistance is (indirectly) measured as a voltage drop across the tape, using a DI-145 analog-to-digital converter.  The A/D connects to a Raspberry Pi over a simple USB-serial interface, where a few Python scripts handle the data recording, filtering, and messaging.

Raspberry Pi 3 monitoring DI-145 A/D converted voltage across eTape liquid level sensor.

The figure below shows a sample of one hour’s worth of recorded data.  The y-axis is the voltage drop across the tape, where lower voltage indicates higher water depth.  Note that the resistance is linear in the water depth, but the voltage is not.

One hour of recorded voltage: raw 120 Hz input (green), filtered (red), with detected pump cycles (blue).

The green curve is the raw voltage; I’m not sure whether the noisy behavior is due to the tape sensor or the A/D.  Fortunately the DI-145 samples at 120 Hz (despite all documentation suggesting 240 Hz), so that a simple low-gain filter, shown in red, cleans things up enough to allow equally simple edge detection of the pump turning on, shown in blue.  (And this is after nearly two weeks of no rain.)  Based on those edge detections, the program sends me text messages with periodic updates, or alerts if the water depth gets too high, or the pump starts cycling too frequently, or abruptly stops cycling.
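
For the curious, the following Python sketch shows the general flavor of the filtering and edge detection; the gain and threshold here are illustrative placeholders, not the values in the actual scripts:

def detect_pump_starts(samples, gain=0.01, threshold=0.05):
    """Low-gain (exponential) filter plus rising-edge detection: the
    pump emptying the pit causes an abrupt rise in measured voltage."""
    filtered = baseline = samples[0]
    starts = []
    for i, x in enumerate(samples):
        filtered += gain * (x - filtered)    # smooth the noisy raw signal
        if filtered - baseline > threshold:  # abrupt rise above recent low
            starts.append(i)                 # the pump just turned on
            baseline = filtered
        elif filtered < baseline:
            baseline = filtered              # track the running minimum
    return starts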

All of the Python code is available at the usual location here.