**Introduction**

Given two vectors in three dimensions, what is the most accurate way to compute the angle between them? I have seen several different approaches to this problem recently in the wild, and although I knew some of them had potential issues, I wasn’t sure just how bad things might get in practice, nor which alternative was best as a replacement.

To make the setup more precise, let’s assume that we are given two non-zero input vectors *u* and *v*, represented *exactly* by their double-precision coordinates, and we desire a function that returns a double-precision value that most closely approximates the angle between the vectors, with all intermediate computation also done in double precision.
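The code snippets below operate on a small `Vector` type with `dot`, `cross`, `norm`, and arithmetic-operator helpers that are not shown explicitly; here is one minimal sketch of what the later snippets assume (the member names `x`, `y`, `z` match the law-of-cosines snippet below):

```cpp
#include <cmath>

struct Vector {
    double x, y, z;
};

// Componentwise arithmetic used by the formulas below.
Vector operator+(const Vector& a, const Vector& b) {
    return {a.x + b.x, a.y + b.y, a.z + b.z};
}

Vector operator-(const Vector& a, const Vector& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

Vector operator*(double s, const Vector& a) {
    return {s * a.x, s * a.y, s * a.z};
}

// Dot product u . v.
double dot(const Vector& u, const Vector& v) {
    return u.x * v.x + u.y * v.y + u.z * v.z;
}

// Cross product u x v.
Vector cross(const Vector& u, const Vector& v) {
    return {u.y * v.z - u.z * v.y,
            u.z * v.x - u.x * v.z,
            u.x * v.y - u.y * v.x};
}

// Euclidean length ||u||.
double norm(const Vector& u) {
    return std::sqrt(dot(u, u));
}
```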

**Kahan’s Mangled Angles**

William Kahan discusses three formulas in the “Mangled Angles” section of the paper linked below. The first is the “usual” dot product formula:

$\theta = \cos^{-1}\left(\dfrac{u \cdot v}{\|u\|\,\|v\|}\right)$

with the following C++ implementation, which, as Kahan points out, requires clamping the double-precision quotient to the interval $[-1, 1]$ to avoid a NaN result for some vectors that are nearly parallel:

```cpp
double angle(const Vector& u, const Vector& v) {
    return std::acos(std::min(1.0, std::max(-1.0, dot(u, v) / (norm(u) * norm(v)))));
}
```

Kahan subsequently describes another formula using the cross product:

$\theta = \sin^{-1}\left(\dfrac{\|u \times v\|}{\|u\|\,\|v\|}\right)$, replacing $\theta$ with $\pi - \theta$ when $u \cdot v < 0$,

with the following implementation:

```cpp
double angle(const Vector& u, const Vector& v) {
    double angle = std::asin(std::min(1.0, norm(cross(u, v)) / (norm(u) * norm(v))));
    if (dot(u, v) < 0) {
        angle = 3.141592653589793 - angle;
    }
    return angle;
}
```

Interestingly, Kahan does *not* mention that this formula *also* requires clamping the `asin` argument to the interval $[-1, 1]$: for some nearly orthogonal inputs, rounding can push the computed quotient slightly above 1, and the unclamped `asin` would return NaN.

Finally, despite referring to the above formula as “the best known in three dimensions,” Kahan finishes with the following “better formula less well known than it deserves”:

$\theta = 2 \tan^{-1}\left(\dfrac{\bigl\|\,\|v\|\,u - \|u\|\,v\,\bigr\|}{\bigl\|\,\|v\|\,u + \|u\|\,v\,\bigr\|}\right)$

with the following implementation:

```cpp
double angle(const Vector& u, const Vector& v) {
    double nu = norm(u);
    double nv = norm(v);
    return 2 * std::atan2(norm(nv * u - nu * v), norm(nv * u + nu * v));
}
```

That’s a lot of square roots. I didn’t focus on performance here, but it would be an interesting follow-on analysis to compare the speed of each of these formulas.

Other approaches are possible; following is the formula that I *thought* was the most accurate, before reading Kahan’s paper:

$\theta = \tan^{-1}\left(\dfrac{\|u \times v\|}{u \cdot v}\right)$

implemented with the two-argument arctangent, which also handles obtuse angles (negative dot product) correctly:

```cpp
double angle(const Vector& u, const Vector& v) {
    return std::atan2(norm(cross(u, v)), dot(u, v));
}
```

This has the added benefit of involving just a single square root. This is the formula that I used to compute the “true” angle between vectors to compare errors, using arbitrary-precision rational arithmetic to compute the square root (actually, a reciprocal square root, which is slightly easier) and arctangent. All of the source code is available on GitHub.

And finally, the approach that I saw most recently and that motivated this post, using the law of cosines:

$\theta = \cos^{-1}\left(\dfrac{\|u\|^2 + \|v\|^2 - \|u - v\|^2}{2\,\|u\|\,\|v\|}\right)$

with the following implementation:

```cpp
double angle(const Vector& u, const Vector& v) {
    double u2 = u.x * u.x + u.y * u.y + u.z * u.z;
    double v2 = v.x * v.x + v.y * v.y + v.z * v.z;
    Vector d = u - v;
    return std::acos(std::min(1.0, std::max(-1.0,
        (u2 + v2 - (d.x * d.x + d.y * d.y + d.z * d.z))
            / (2 * std::sqrt(u2) * std::sqrt(v2)))));
}
```

**Results**

The relative accuracy of each formula depends on the magnitude of the angle between the input vectors. The following figure shows this comparison for angles near 0 (i.e., nearly parallel vectors), near $\pi/4$, near $\pi/2$ (i.e., nearly orthogonal vectors), and near $\pi$ (i.e., nearly “anti-parallel” vectors).

The *x*-axis indicates an offset from the “true” angle between the input vectors, computed to roughly 200-bit accuracy. The *y*-axis indicates the error in the double-precision output, compared against the true angle *also rounded to double-precision*. The points hugging the bottom of each figure are my poor man’s attempt at indicating *zero* error (note that these are on a log-log scale), i.e., the 64-bit double-precision output matched the corresponding value rounded from the 200-bit true angle. (In many ways this figure feels like a failure of visual display of quantitative information, and I’m not sure how best to improve it.)

So what’s the takeaway? If you don’t care about absolute errors smaller than a few dozen nanoradians, then it doesn’t really matter which formula you use. And if you *do* care about errors (and angles) smaller than that, then be sure that your *inputs* are accurate in the first place. For example, did you normalize your “real” input vectors to unit length first, and if so, how much error did you unintentionally incur as a result? We can construct very small angles between vectors if we restrict to “nice” two-dimensional inputs, such as a vector along a coordinate axis and a slight perturbation of it. But it’s an interesting exercise to see how difficult it is to construct vectors “in general position” (e.g., randomly rotated) with a prescribed small angle between them.

As expected, the two arccosine formulas behave poorly for nearly parallel/anti-parallel vectors, and as Kahan describes, the arcsine formula behaves poorly for nearly orthogonal vectors. The two arctangent formulas are the most consistently accurate, and when the one mentioned by Kahan *is* better, it’s typically *much* better.

**Reference:**

- Kahan, W., *How Futile are Mindless Assessments of Roundoff in Floating-Point Computation?* [PDF]