Floating-point agreement between MATLAB and C++

Introduction

A common development approach in MATLAB is to:

  1. Write MATLAB code until it’s unacceptably slow.
  2. Replace the slowest code with a C++ implementation called via MATLAB’s MEX interface.
  3. Goto step 1.

Regression testing the faster MEX implementation against the slower original MATLAB implementation can be difficult. Even a seemingly line-for-line, “direct” translation from MATLAB to C++, when provided with exactly the same numeric inputs, executed on the same machine, with the same default floating-point rounding mode, can still yield drastically different output. How does this happen?

There are three primary causes of such differences, none of them easy to fix. The purpose of this post is just to describe these causes, focusing on one in particular that, I recently learned, occurs more frequently than I had realized.

1. The butterfly effect

This is where the drastically different results typically come from. Even if the inputs to the MATLAB and MEX implementations are identical, suppose that just one intermediate calculation yields even the smallest possible difference in its result… and is followed by a long sequence of further calculations using that result. That small initial difference can be greatly magnified in the final output, due to cumulative effects of rounding error. For example:

x = 0.1;
for k = 1:100
    x = 4 * x * (1 - x);
end
% x == 0.37244749676375793

double x = 0.10000000000000002;
for (int k = 1; k <= 100; ++k) {
    x = 4 * x * (1 - x);
}
// x == 0.5453779481420313
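
Incidentally, the C++ seed 0.10000000000000002 is exactly the next representable double above 0.1, i.e., the smallest possible perturbation of the MATLAB input. The experiment is easy to reproduce in Python (3.9+, for math.nextafter), whose floats are also IEEE 754 doubles; a minimal sketch:

import math

x = 0.1
y = math.nextafter(0.1, 1.0)  # the next double above 0.1, one ulp away
for k in range(100):
    x = 4 * x * (1 - x)
    y = 4 * y * (1 - y)
print(x, y)  # after 100 iterations the two trajectories bear no resemblance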

This example is only contrived in its simplicity, exaggerating the “speed” of divergence with just a few hundred floating-point operations. Consider a more realistic, more complex Monte Carlo simulation of, say, an aircraft autopilot maneuvering in response to simulated sensor inputs. In a particular Monte Carlo iteration, the original MATLAB implementation might successfully dampen a severe Dutch roll, while the MEX version might result in a crash (of the aircraft, I mean).

Of course, for this divergence of behavior to occur at all, there must be that first difference in the result of an intermediate calculation. So this “butterfly effect” really is just an effect— it’s not a cause at all, just a magnified symptom of the two real causes, described below.

2. Compiler non-determinism

As far as I know, the MATLAB interpreter is pretty well-behaved and predictable, in the sense that MATLAB source code explicitly specifies the order of arithmetic operations, and the precision with which they are executed. A C++ compiler, on the other hand, has a lot of freedom in how it translates source code into the corresponding sequence of arithmetic operations… and possibly even the precision with which they are executed.

Even if we assume that all operations are carried out in double precision, order of operations can matter; the problem is that this order is not explicitly controlled by the C++ source code being compiled (edit: at least for some speed-optimizing compiler settings). For example, when a compiler encounters the statement double x = a+b+c;, it could emit code to effectively calculate (a+b)+c, or a+(b+c), which do not necessarily produce the same result. That is, double-precision addition is not associative:

(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3) % this is false

Worse, explicit parentheses in the source code may help, but they don’t have to.

Another possible problem is intermediate precision. For example, in the process of computing (a+b)+c, the intermediate result t=(a+b) might be computed in, say, 80-bit extended precision, before the final sum is rounded to 64-bit double precision. This has bitten me in other ways discussed here before; Bruce Dawson has several interesting articles with much more detail on this and other issues with floating-point arithmetic.

3. Transcendental functions

So suppose that we are comparing the results of MATLAB and C++ implementations of the same algorithm. We have verified that the numeric inputs are identical, and we have also somehow verified that the arithmetic operations specified by the algorithm are executed in the same order, all in double precision, in both implementations. Yet the output still differs between the two.

Another possible– in fact likely– cause of such differences is in the implementation of transcendental functions such as sin, cos, atan2, exp, etc., which are not required by IEEE-754-2008 to be correctly rounded due to the table maker’s dilemma. For example, following is the first actual instance of this problem that I encountered years ago, reproduced here in MATLAB R2017a (on my Windows laptop):

x = 193513.887169782;
y = 44414.97148164646;
atan2(y, x) == 0.2256108075334872

while the corresponding C++ implementation (still called from MATLAB, built as a MEX function using Microsoft Visual Studio 2015) yields

#include <cmath>
...
std::atan2(y, x) == 0.22561080753348722;

The two results differ by an ulp, with (in this case) the MATLAB version being correctly rounded.
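
As a quick check of that one-ulp claim, in Python (3.9+ for math.nextafter), whose repr does round-trip doubles:

import math

matlab_result = 0.2256108075334872
cpp_result = 0.22561080753348722
print(cpp_result == math.nextafter(matlab_result, 1.0))  # True: adjacent doubles
print(cpp_result - matlab_result)  # about 2.8e-17, one ulp at this magnitude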

(Rant: Note that both of the above values, despite actually being different, display as the same 0.225610807533487 in the MATLAB command window, which for some reason neglects to provide “round-trip” representations of its native floating-point data type. See here for a function that I find handy when troubleshooting issues like this.)

What I found surprising, after recently exploring this issue in more detail, is that the above example is not an edge case: the MATLAB and C++ cmath implementations of the trigonometric and exponential functions disagree quite frequently– and furthermore, the above example notwithstanding, the C++ implementation tends to be more accurate more of the time, significantly so in the case of atan2 and the exponential functions, as the following figure shows.

Probability of MATLAB/C++ differences in function evaluation for input randomly selected from the unit interval.

The setup: on my Windows laptop, with MATLAB R2017a and Microsoft Visual Studio 2015 (with code compiled from MATLAB as a MEX function with the provided default compiler configuration file), I compared each function’s output for 1 million randomly generated inputs– or pairs of inputs in the case of atan2— in the unit interval.

Of those 1 million inputs, I calculated how many yielded different output from MATLAB and C++, where the differences fell into 3 categories:

  • Red indicates that MATLAB produced the correctly rounded result, with the exact value between the MATLAB and C++ outputs (i.e., both implementations had an error less than an ulp).
  • Gray indicates that C++ produced the correctly rounded result, with both implementations having an error of less than an ulp.
  • Blue indicates that C++ produced the correctly rounded result, between the exact value and the MATLAB output (i.e., MATLAB had an error greater than an ulp).

(A few additional notes on test setup: first, I used random inputs in the unit interval, instead of evenly spaced values with domains specific to each function, to ensure testing all of the mantissa bits in the input, while still allowing some variation in the exponent. Second, I also tested addition, subtraction, multiplication, division, and square root, just for completeness, and verified that they agree exactly 100% of the time, as guaranteed by IEEE-754-2008… at least for one such operation in isolation, eliminating any compiler non-determinism mentioned earlier. And finally, I also tested against MEX-compiled C++ using MinGW-w64, with results identical to MSVC++.)
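
(The original harness compares MATLAB against a MEX function, but an analogous experiment is easy to run anywhere: the following Python sketch, which assumes the mpmath package for an arbitrary-precision reference, estimates how often the local math library’s atan2 is not correctly rounded on the unit interval.)

import math
import random
from mpmath import mp, atan2 as ref_atan2

mp.prec = 200  # far more reference precision than a double needs

trials = 100000
mismatches = 0
for _ in range(trials):
    x, y = random.random(), random.random()
    # float() rounds the high-precision reference to the nearest double
    if math.atan2(y, x) != float(ref_atan2(y, x)):
        mismatches += 1
print(mismatches / trials)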

For most of these functions, the inputs where MATLAB and C++ differ are distributed across the entire unit interval. The two-argument form of the arctangent is particularly interesting. The following figure “zooms in” on the 16.9597% of the sample inputs that yielded different outputs, showing a scatterplot of those inputs using the same color coding as above. (The red, where MATLAB provides the correctly rounded result, is so infrequent it is barely visible in the summary bar chart.)

Scatterplot of points where MATLAB/C++ differ in evaluation of atan2(y,x), using the same color coding as above.

Conclusion

This might all seem rather grim. It certainly does seem hopeless to expect that a C++ translation of even modestly complex MATLAB code will preserve exact agreement of output for all inputs. (In fact, it’s even worse than this. Even the same MATLAB code, executed in the same version of MATLAB, but on two different machines with different architectures, will not necessarily produce identical results given identical inputs.)

But exact agreement between different implementations is rarely the “real” requirement. Recall the Monte Carlo simulation example described above. If we run, say, 1000 iterations of a simulation in MATLAB, and the same 1000 iterations with a MEX translation, it may be that iteration #137 yields different output in the two versions. But that should be okay… as long as the distribution over all 1000 outputs is the same– or sufficiently similar– in both cases.

Posted in Uncategorized | 3 Comments

What is (-1&3)?

This is just nostalgic amusement.  I recently encountered the following while poking around in some code that I had written a disturbingly long time ago:

switch (-1&3) {
    case 1: ...
    case 2: ...
    case 3: ...
...
}

What does this code do?  This is interesting because the switch expression is a constant that could be evaluated at compile time (indeed, this could just as well have been implemented with a series of #if/#elif preprocessor directives instead of a switch-case statement).

As usual, it seems more fun to present this as a puzzle, rather than just point and say, “This is what I did.”  For context, or possibly as a hint, this code was part of a task involving parsing and analyzing digital terrain elevation data (DTED), where it makes at least some sense.

Posted in Uncategorized | 2 Comments

The following are equivalent

I have been reviewing Rosen’s Discrete Mathematics and Its Applications textbook for a course this fall, and I noticed an interesting potential pitfall for students in the first chapter on logic and proofs.

Many theorems in mathematics are of the form, “p if and only if q,” where p and q are logical propositions that may be true or false.  For example:

Theorem 1: An integer m is even if and only if m+2 is even.

where in this case p is “m is even” and q is “m+2 is even.”  The statement of the theorem may itself be viewed as a proposition p \leftrightarrow q, where the logical connective \leftrightarrow is read “if and only if,” and behaves like Boolean equality.  Intuitively, p \leftrightarrow q states that “p and q are (materially) equivalent; they have the same truth value, either both true or both false.”

(Think Boolean expressions in your favorite programming language; for example, the proposition p \land q, read “p and q,” looks like p && q in C++, assuming that p and q are of type bool.  Similarly, the proposition p \leftrightarrow q looks like p == q in C++.)

Now consider extending this idea to the equivalence of more than just two propositions.  For example:

Theorem 2: Let m be an integer.  Then the following are equivalent:

  1. m is even.
  2. m+2 is even.
  3. m-2 is even.

The idea is that the three propositions above (let’s call them p_1, p_2, p_3) always have the same truth value; either all three are true, or all three are false.

So far, so good.  The problem arises when Rosen expresses this general idea of equivalence of multiple propositions p_1, p_2, \ldots, p_n as

p_1 \leftrightarrow p_2 \leftrightarrow \ldots \leftrightarrow p_n

Puzzle: What does this expression mean?  A first concern might be that we need parentheses to eliminate any ambiguity.  But almost unfortunately, it can be shown that the \leftrightarrow connective is associative, meaning that this is a perfectly well-formed propositional formula even without parentheses.  The problem is that it doesn’t mean what it looks like it means.
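
(Spoiler alert: a brute-force way to check your answer is to enumerate the truth table in Python, where, as noted above, == plays the role of \leftrightarrow, and compare against what “the following are equivalent” ought to mean:)

from itertools import product

for p1, p2, p3 in product([False, True], repeat=3):
    chained = (p1 == p2) == p3   # the grouping is irrelevant, by associativity
    all_equal = p1 == p2 == p3   # Python chains this as (p1 == p2) and (p2 == p3)
    print(p1, p2, p3, chained, all_equal)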

Reference:

  • Rosen, K. H. (2011). Discrete Mathematics and Its Applications (7th ed.). New York, NY: McGraw-Hill. ISBN-13: 978-0073383095

Posted in Uncategorized | 4 Comments

Dice puzzle

I recently encountered the following interesting problem:

Suppose that I put 6 identical dice in a cup, and roll them simultaneously (as in Farkle, for example).  Then you take those same 6 dice, and roll them all again.  What is the probability that we both observe the same outcome?  For example, we may both roll one of each possible value (1-2-3-4-5-6, but not necessarily in order), or we may both roll three 3s and three 6s, etc.

I like this problem as an “extra” for combinatorics students learning about generating functions.  A numeric solution likely requires some programming (though I’ve been wrong about that here before), but the implementation is not overly complex, while still being slightly beyond the “usual” type of homework problem in the construction of its solution.
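
For those who want to check their work, following is a brute-force Python sketch that sidesteps the generating function approach, simply summing the squared probability of every possible multiset of face values; if I have implemented it correctly, the answer is about 0.00406.

from itertools import combinations_with_replacement
from math import factorial

def prob_same_outcome(n_dice=6, sides=6):
    total = 0.0
    for roll in combinations_with_replacement(range(sides), n_dice):
        ways = factorial(n_dice)  # multinomial: ordered rolls yielding this multiset
        for v in set(roll):
            ways //= factorial(roll.count(v))
        total += (ways / sides ** n_dice) ** 2
    return total

print(prob_same_outcome())  # about 0.00406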

Posted in Uncategorized | 2 Comments

A number concatenation problem

Introduction

Consider the following problem: given a finite set of positive integers, arrange them so that the concatenation of their decimal representations is as small as possible.  For example, given the numbers {1, 10, 12}, the arrangement (10, 1, 12) yields the minimum possible value 10112.

I saw a variant of this problem in a recent Reddit post, where it was presented as an “easy” programming challenge, referring in turn to a blog post by Santiago Valdarrama describing it as one of “Five programming problems every software engineer should be able to solve in 1 hour.”

I think the problem is interesting in that it seems simple and intuitive– and indeed it does have a solution with a relatively straightforward implementation– but there are also several “intuitive” approaches that don’t work… and even for the correct implementation, there is some subtlety involved in proving that it really works.

Brute force

First, the following Python 3 implementation simply tries all possible arrangements, returning the lexicographically smallest:

import itertools

def min_cat(num_strs):
    return min(''.join(s) for s in itertools.permutations(num_strs))

(Aside: for convenience in the following discussion, all inputs are assumed to be a list of strings of decimal representations of positive integers, rather than the integers themselves.  This lends some brevity to the code, without adding to or detracting from the complexity of the algorithms.)

This implementation works because every concatenated arrangement has the same length, so a lexicographic comparison is equivalent to comparing the corresponding numeric values.  It’s unacceptably inefficient, though, since we have to consider all n! possible arrangements of n inputs.

Sorting

We can do better, by sorting the inputs in non-decreasing order, and concatenating the result.  But this is where the problem gets tricky: what order relation should we use?

We can’t just use the natural ordering on the integers; using the same earlier example, the sorted arrangement (1, 10, 12) yields 11012, which is larger than the minimum 10112.  Similarly, the sorted arrangement (2, 11) yields 211, which is larger than the minimum 112.

We can’t use the natural lexicographic ordering on strings, either; the initial example (1, 10, 12) fails again here.

The complexity arises because the numbers in a given input set may have different lengths, i.e. numbers of digits.  If all of the numbers were guaranteed to have the same number of digits, then the numeric and lexicographic orderings are the same, and both yield the correct solution.  Several users in the Reddit thread, and even Valdarrama, propose “padding” each input in various ways before sorting to address this, but this is also tricky to get right.  For example, how should the inputs {12, 121} be padded so that a natural lexicographic ordering yields the correct minimum value 12112?

There is a way to do this, which I’ll leave as an exercise for the reader.  Instead, consider the following solution (still Python 3):

import functools

def cmp(x, y):
    return int(x + y) - int(y + x)

def min_cat(num_strs):
    return ''.join(sorted(num_strs, key=functools.cmp_to_key(cmp)))

There are several interesting things going on here.  First, a Python-specific wrinkle: we need to specify the order relation \prec by which to sort.  This actually would have looked slightly simpler in the older Python 2.7, where you could specify a binary comparison function directly.  In Python 3, you can only provide a unary key function to apply to each element in the list, and sort by that.  It’s an interesting exercise in itself to work out how to “convert” a comparison function into the corresponding key function; here we lean on the built-in functools.cmp_to_key to do it for us.  (This idea of specifying an order relation by a natural comparison without a corresponding natural key has been discussed here before, in the context of Reddit’s comment ranking algorithm.)

Second, recall that the input num_strs is a list of strings, not integers, so in the implementation of the comparison cmp(x, y), the arguments are strings, and the addition operators are concatenation.  The comparison function returns a negative value if the concatenation xy, interpreted as an integer, is less than yx, zero if they are equal, or a positive value if xy is greater than yx.  The intended effect is to sort according to the relation x \prec y defined as xy < yx.
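
For example, applying this version to the earlier examples:

print(min_cat(['1', '10', '12']))  # '10112'
print(min_cat(['2', '11']))        # '112'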

It works… but should it?

This implementation has a nice intuitive justification: suppose that the entire input list contained just two strings x and y.  Then the comparison function effectively realizes the “brute force” evaluation of the two possible arrangements xy and yx.

However, that same intuitive reasoning becomes dangerous as soon as we consider input lists with more than two elements.  That comparison function should bother us, for several reasons:

First, it’s not obvious that the resulting sorted ordering is even well-defined.  That is, is the order relation \prec a strict weak ordering of the set of (decimal string representations of) positive integers?  It certainly isn’t a total ordering, since distinct values can compare as “equal:” for example, consider (1, 11), or (123, 123123), etc.

Second, even assuming the comparison function does realize a strict weak ordering (we’ll prove this shortly), that ordering has some interesting properties.  For example, unlike the natural ordering on the positive integers, there is no smallest element.  That is, for any x, we can always find another strictly lesser y \prec x (as a simple example, note that x0 \prec x, e.g., 1230 \prec 123).  Also unlike the natural ordering on the positive integers, this ordering is dense; given any pair x \prec y, we can always find a third value z in between, i.e., x \prec z \prec y.

Finally, and perhaps most disturbingly, observe that a swap-based sorting algorithm will not necessarily make “monotonic” progress toward the solution: swapping elements that are “out of order” in terms of the comparison function may not always improve the overall situation.  For example, consider the partially-sorted list (12, 345, 1), whose concatenation yields 123451.  The comparison function indicates that 12 and 1 are “out of order” (121>112), but swapping them makes things worse: the concatenation of (1, 345, 12) yields the larger value 134512.
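
Several of the claims above are easy to check with the comparison function defined earlier:

print(cmp('1230', '123'))  # negative: 1230 precedes 123, so no smallest element
print(cmp('12', '1'))      # positive: 12 and 1 compare as "out of order"
print(int('12' + '345' + '1'), int('1' + '345' + '12'))  # 123451 134512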

Proof of correctness

Given all of this perhaps non-intuitive weirdness, it seems worth being more rigorous in proving that the above implementation actually does work.  We do this in two steps:

Theorem 1: The relation \prec defined by the comparison function cmp is a strict weak ordering.

Proof: Irreflexivity follows from the definition.  To show transitivity, let x, y, z be positive integers with a, b, c digits, respectively, with x \prec y and y \prec z.  Then

10^b x+y < 10^a y+x and 10^c y+z < 10^b z+y

Rearranging these gives x(10^b-1) < y(10^a-1) and y(10^c-1) < z(10^b-1).  Multiplying the second inequality by x/y and the first by z/y (both positive), and chaining,

x(10^c-1) < x \frac{z}{y}(10^b-1) < z(10^a-1)

whence

10^c x-x < 10^a z-z

10^c x+z < 10^a z+x

i.e., x \prec z.  Incomparability of x and y corresponds to xy=yx; this is an equivalence relation, with reflexivity and symmetry following from the definition, and transitivity shown exactly as above (with equality in place of inequality).

Theorem 2: Concatenating positive integers sorted by \prec yields the minimum value among all possible arrangements.

Proof: Let x_1 x_2 \ldots x_n be the concatenation of an arrangement of positive integers with minimum value, and suppose that it is not ordered by \prec, i.e., x_i \succ x_{i+1} for some 1 \leq i < n.  Then the concatenation x_1 x_2 \ldots x_{i+1} x_i \ldots x_n is strictly smaller, a contradiction.

(Note that this argument is only “simple” because x_i and x_{i+1} are adjacent.  As mentioned above, swapping non-adjacent elements that are out of order may not in general decrease the overall value.)
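
As an additional sanity check, a randomized regression test of the sorting implementation against the brute force implementation is reassuring (a sketch, assuming the two versions of min_cat above have been renamed min_cat_brute and min_cat_sorted, respectively):

import random

for _ in range(1000):
    num_strs = [str(random.randint(1, 999)) for _ in range(random.randint(1, 6))]
    assert min_cat_sorted(num_strs) == min_cat_brute(num_strs)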

Posted in Uncategorized | 4 Comments

Cutting crown molding

This post captures my notes on how to determine the miter and bevel angles for cutting crown molding with a compound miter saw.  There are plenty of web sites with tables of these angles, and even formulas for calculating them, but I thought it would be useful to be more explicit about sign conventions, orientation of the finished cut pieces, etc., as well as to provide a slightly different version of the formulas that doesn’t involve a discontinuity right in the middle of the region of interest, as typically seems to be the case.

Instructions

Let s be the measure of the spring angle, i.e., the angle made by the flat back side of the crown molding with the wall (typically 38 or 45 degrees).  Let w be the measure of the wall angle (e.g., 90 degrees for an inside corner, 270 degrees for an outside corner, etc.).

To cut the piece on the left-hand wall (facing the corner), set the bevel angle b and miter angle m to

b = \arcsin(-\cos\frac{w}{2}\cos s)

m = \arcsin(-\tan b \tan s)

where positive angles are to the right (i.e., positive miter angle is counter-clockwise).  Cut with the ceiling contact edge against the fence, and the finished piece on the left side of the blade.

To cut the piece on the right-hand wall (facing the corner), reverse the miter angle,

m' = -m = \arcsin(\tan b \tan s)

and cut with the wall contact edge against the fence, and the finished piece still on the left side of the blade.
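
As a sanity check, these formulas are easy to evaluate programmatically; a Python sketch (the function name is mine):

import math

def crown_angles(spring_deg, wall_deg):
    # bevel and miter angles, in degrees, for the left-hand piece
    s = math.radians(spring_deg)
    w = math.radians(wall_deg)
    b = math.asin(-math.cos(w / 2) * math.cos(s))
    m = math.asin(-math.tan(b) * math.tan(s))
    return math.degrees(b), math.degrees(m)

# 38-degree spring angle, inside 90-degree corner: bevel about -33.9 degrees,
# miter about 31.6, agreeing in magnitude with typical published tables.
print(crown_angles(38, 90))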

Derivation

Let’s start by focusing on the crown molding piece on the left-hand wall as we face the corner.  Consider a coordinate frame with the ceiling corner at the origin, the positive x-axis running along the crown molding to be cut, the negative z-axis running down to the floor, and the y-axis completing the right-handed frame, as shown in the figure below.  In this example of an inside 90-degree corner, the positive y-axis runs along the opposite wall.

Cutting crown molding for left-hand wall. Example shows an inside corner (w=90 degrees).

The desired axis of rotation of the saw blade is normal to the triangular cross section at the corner, which may be computed as the cross product of unit vectors from the origin to the vertices of this cross section:

\mathbf{u} = (0, 0, -1) \times (\cos\frac{w}{2}, \sin\frac{w}{2}, 0)

To cut with the back of the crown molding flat on the saw table (the xz-plane), with the ceiling contact edge against the fence (the xy-plane), rotate this vector by angle s about the x-axis:

\mathbf{v} = \left(\begin{array}{ccc}1&0&0\\0&\cos s&-\sin s\\0&\sin s&\cos s\end{array}\right) \mathbf{u}

It remains to compute the bevel and miter rotations that transform the axis of rotation of the saw blade from its initial (1,0,0) to \mathbf{v}.  With the finished piece on the left side of the blade, the bevel is a rotation by angle b about the z-axis, followed by the miter rotation by angle m about the y-axis:

\left(\begin{array}{ccc}\cos m&0&\sin m\\0&1&0\\-\sin m&0&\cos m\end{array}\right) \left(\begin{array}{ccc}\cos b&-\sin b&0\\ \sin b&\cos b&0\\0&0&1\end{array}\right) \left(\begin{array}{c}1\\0\\0\end{array}\right) = \mathbf{v}

Solving yields the bevel and miter angles above.  For the crown molding piece on the right-hand wall, we can simply change the sign of both s and w, assuming that the wall contact edge is against the fence (still with the finished piece on the left side of the blade).  The result is no change to the bevel angle, and a sign change in the miter angle.

Posted in Uncategorized | Leave a comment

Iterative Poohsticks

The game of Poohsticks was invented by A. A. Milne, and appears in The House at Pooh Corner, where Pooh and Christopher Robin each simultaneously drop a stick into a river from the upstream side of a bridge.  The winner is the one whose stick emerges first from the downstream side.  Sounds like fun, right?

Right.  Now consider the following iterative version of the game: as each player’s stick emerges from under the bridge, he retrieves it, then runs back to the upstream side of the bridge and drops the stick again.  Both players continue in this way, until one player’s stick “laps” the other, having emerged from under the bridge for the n-th time before the other player’s stick has emerged n-1 times.

Let’s make this more precise by modeling the game as a discrete event simulation with positive integer parameters (a, b, c).  Both players start at time 0 by simultaneously dropping their sticks, each of which emerges from under the bridge an integer number of seconds later, independently and uniformly distributed between a and b (inclusive).  The river is random, but the players are otherwise evenly matched: each player then takes a constant c seconds to recover his stick from the water, run back to the upstream side of the bridge, and drop the stick again.

Suppose, for example, that (a,b,c)=(10,30,5).  If the game ends at the instant the winner’s stick emerges from under the bridge having first lapped the other player’s stick, then what is the expected time t(a,b,c) to complete a game of Iterative Poohsticks?

I think this is a great problem.  As is often the case here, it’s not only an interesting mathematical problem to calculate the exact expected number of seconds to complete the game, but in addition, this game can even be tricky to simulate correctly, as a means of approximating the solution.  The potential trickiness stems from an ambiguity in the description of how the game ends: what happens if the leading player’s stick emerges from under the bridge for the n-th time, at exactly the same time that the trailing player’s stick emerges for the n-1-st time?

There are two possibilities.  Under version A of the rules, the game continues, so that the leading player’s stick must emerge strictly before the trailing player’s stick.  Under version B of the rules, the game ends there, so that the leading player’s stick need only emerge as or before the trailing player’s stick emerges in order to win.

(It’s interesting to consider which of these versions of the game is easier to simulate and/or analyze.  I think version B admits a slightly cleaner exact solution, although my simulation of the game switches more easily between the two versions.  For reference, the expected time to complete the game with the above parameters is about 309.911 seconds for version A, and about 290.014 seconds for version B.)
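
For reference, following is a sketch of one possible simulation (not necessarily the one mentioned above); the only delicate parts are generating both players’ emergence times in lockstep and getting the win predicate right for each version of the rules:

import random

def game_time(a, b, c, version='A'):
    # Simulate one game; return the time when the winner's stick emerges.
    beats = (lambda s, t: s < t) if version == 'A' else (lambda s, t: s <= t)
    emerge = ([], [])  # each player's emergence times so far
    drop = [0, 0]      # each player's next drop time
    while True:
        for i in (0, 1):
            emerge[i].append(drop[i] + random.randint(a, b))
            drop[i] = emerge[i][-1] + c
        n = len(emerge[0])
        if n >= 2:
            for i in (0, 1):
                # player i laps if his n-th emergence beats the other
                # player's (n-1)-st emergence
                if beats(emerge[i][n - 1], emerge[1 - i][n - 2]):
                    return emerge[i][n - 1]

trials = 100000
for v in ('A', 'B'):
    print(v, sum(game_time(10, 30, 5, v) for _ in range(trials)) / trials)
# estimates should land near 309.911 and 290.014, respectively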

Posted in Uncategorized | 3 Comments