**Introduction**

A common development approach in MATLAB is to:

- Write MATLAB code until it’s unacceptably slow.
- Replace the slowest code with a C++ implementation called via MATLAB’s MEX interface.
- Goto step 1.

Regression testing the faster MEX implementation against the slower original MATLAB can be difficult. Even a seemingly line-for-line, “direct” translation from MATLAB to C++, when provided with exactly the same numeric inputs, executed on the same machine, with the same default floating-point rounding mode, can still yield drastically different output. How does this happen?

There are three primary causes of such differences, none of them easy to fix. The purpose of this post is just to describe these causes, focusing on one in particular that I learned occurs more frequently than I realized.

**1. The butterfly effect**

This is where the *drastically* different results typically come from. Even if the *inputs* to the MATLAB and MEX implementations are identical, suppose that just one *intermediate* calculation yields even the smallest possible difference in its result… and is followed by a long sequence of *further* calculations using that result. That small initial difference can be greatly magnified in the final output, due to cumulative effects of rounding error. For example:

```matlab
x = 0.1;
for k = 1:100
    x = 4 * (1 - x);
end
% x == 0.37244749676375793
```

```cpp
double x = 0.10000000000000002;
for (int k = 1; k <= 100; ++k) {
    x = 4 * (1 - x);
}
// x == 0.5453779481420313
```

This example is only contrived in its simplicity, exaggerating the “speed” of divergence with just a few hundred floating-point operations. Consider a more realistic, more complex Monte Carlo simulation of, say, an aircraft autopilot maneuvering in response to simulated sensor inputs. In a *particular* Monte Carlo iteration, the original MATLAB implementation might successfully dampen a severe Dutch roll, while the MEX version might result in a crash (of the aircraft, I mean).

Of course, for this divergence of behavior to occur at all, there must be that *first* difference in the result of an intermediate calculation. So this “butterfly effect” really is just an *effect*— it’s not a cause at all, just a magnified symptom of the two real causes, described below.

**2. Compiler non-determinism**

As far as I know, the MATLAB interpreter is pretty well-behaved and predictable, in the sense that MATLAB source code explicitly specifies the order of arithmetic operations, and the precision with which they are executed. A C++ compiler, on the other hand, has a lot of freedom in how it translates source code into the corresponding sequence of arithmetic operations… and possibly even the precision with which they are executed.

Even if we assume that all operations are carried out in double precision, order of operations can matter; the problem is that this order is not explicitly controlled by the C++ source code being compiled (*edit*: at least for some speed-optimizing compiler settings). For example, when a compiler encounters the statement `double x = a+b+c;`, it could emit code to effectively calculate either `(a+b)+c` or `a+(b+c)`, and floating-point addition is not associative:

```matlab
(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3) % this is false
```

Worse, explicit parentheses in the source code *may* constrain the evaluation order, but the compiler doesn't *have* to honor them.

Another possible problem is intermediate precision. For example, in the process of computing `a+b+c`, a compiler targeting the x87 floating-point unit may hold intermediate results in 80-bit extended-precision registers, rounding to 64-bit double only when the final result is stored back to memory, which can yield a different answer than rounding after every operation.

**3. Transcendental functions**

So suppose that we are comparing the results of MATLAB and C++ implementations of the same algorithm. We have verified that the numeric inputs are identical, and we have also somehow verified that the arithmetic operations specified by the algorithm are executed in the same order, all in double precision, in both implementations. Yet the output *still* differs between the two.

Another possible– in fact likely– cause of such differences is in the implementation of transcendental functions such as `sin`, `cos`, `atan2`, `exp`, etc., which are not required by IEEE-754-2008 to be correctly rounded, due to the table maker's dilemma. For example, following is the first actual instance of this problem that I encountered years ago, reproduced here in MATLAB R2017a (on my Windows laptop):

```matlab
x = 193513.887169782;
y = 44414.97148164646;
atan2(y, x) == 0.2256108075334872
```

while the corresponding C++ implementation (still called from MATLAB, built as a MEX function using Microsoft Visual Studio 2015) yields

```cpp
#include <cmath>
...
std::atan2(y, x) == 0.22561080753348722;
```

The two results differ by an ulp, with (in this case) the MATLAB version being correctly rounded.

(*Rant*: Note that *both* of the above values, despite actually being different, display as the same 0.225610807533487 in the MATLAB command window, which for some reason neglects to provide “round-trip” representations of its native floating-point data type. See here for a function that I find handy when troubleshooting issues like this.)

What I found surprising, after recently exploring this issue in more detail, is that the above example is not an edge case: the MATLAB and C++ `cmath` implementations of the trigonometric and exponential functions disagree quite frequently– and furthermore, the above example notwithstanding, the C++ implementation tends to be more accurate more of the time, significantly so in the case of `atan2` and the exponential functions, as the following figure shows.

The setup: on my Windows laptop, with MATLAB R2017a and Microsoft Visual Studio 2015 (with code compiled from MATLAB as a MEX function with the provided default compiler configuration file), I compared each function's output for 1 million randomly generated inputs– or pairs of inputs in the case of `atan2`– in the unit interval.

Of those 1 million inputs, I calculated how many yielded different output from MATLAB and C++, where the differences fell into 3 categories:

- Red indicates that MATLAB produced the correctly rounded result, with the exact value *between* the MATLAB and C++ outputs (i.e., both implementations had an error of less than an ulp).
- Gray indicates that C++ produced the correctly rounded result, with both implementations having an error of less than an ulp.
- Blue indicates that C++ produced the correctly rounded result, lying *between* the exact value and the MATLAB output (i.e., MATLAB had an error *greater* than an ulp).

(A few additional notes on test setup: first, I used random inputs in the unit interval, instead of evenly-spaced values with domains specific to each function, to ensure testing all of the mantissa bits in the input, while still allowing some variation in the exponent. Second, I also tested addition, subtraction, multiplication, division, and square root, just for completeness, and verified that they agree exactly 100% of the time, as guaranteed by IEEE-754-2008… at least for *one* such operation in isolation, eliminating any compiler non-determinism mentioned earlier. And finally, I also tested against MEX-compiled C++ using MinGW-w64, with results identical to MSVC++.)

For most of these functions, the inputs where MATLAB and C++ differ are distributed across the entire unit interval. The two-argument form of the arctangent is particularly interesting. The following figure “zooms in” on the 16.9597% of the sample inputs that yielded different outputs, showing a scatterplot of those inputs using the same color coding as above. (The red, where MATLAB provides the correctly rounded result, is so infrequent it is barely visible in the summary bar chart.)

**Conclusion**

This might all seem rather grim. It certainly does seem hopeless to expect that a C++ translation of even modestly complex MATLAB code will preserve exact agreement of output for all inputs. (In fact, it’s even worse than this. Even the same MATLAB code, executed in the same version of MATLAB, but on two different machines with different architectures, will not necessarily produce identical results given identical inputs.)

But exact agreement between different implementations is rarely the “real” requirement. Recall the Monte Carlo simulation example described above. If we run, say, 1000 iterations of a simulation in MATLAB, and the same 1000 iterations with a MEX translation, it may be that iteration #137 yields different output in the two versions. But that *should* be okay… as long as the *distribution* over all 1000 outputs is the same– or sufficiently similar– in both cases.