## Bresenham’s circles are (always?) optimal

### Introduction

In my recent experiment with string art, I needed Bresenham’s line algorithm to rasterize a line, that is, to compute the set of pixels that most closely approximate a straight line segment. Suppose instead that we want to rasterize a circle, as shown in the figure below.

*Figure: Bresenham’s algorithm used to rasterize a circle with radius 10. Integer lattice points are assumed to be at the center of each pixel.*

There is a similarly named Bresenham’s circle algorithm to do this, which is arguably even simpler than the line-drawing algorithm (code is available on GitHub):

```c
// Call plot(_, _) to draw pixels on circle at origin with given radius.
{
    int x = radius;
    int y = 0;
    int error = 3 - 2 * radius;
    while (x >= y)
    {
        plot(x, y); plot(x, -y); plot(-x, y); plot(-x, -y);
        plot(y, x); plot(y, -x); plot(-y, x); plot(-y, -x);
        if (error > 0)
        {
            error -= 4 * (--x);
        }
        error += 4 * (++y) + 2;
    }
}
```


I have a fond memory of my first encounter with this algorithm as a kid reading Nibble Magazine (“Seven ready-to-type Apple II programs!”). In his article, Brent Iverson had “discovered a neat little circle-generating algorithm in a graphics textbook” … but neglected to mention which textbook, or the name Bresenham, or any other details about how this mysterious snippet of code (which was in BASIC, and in slightly different but equivalent form) managed to work using only integer addition and bit shift operations. In 1988, without search engines or internet forums, I was on my own.

Today, there are derivations of this algorithm all over the web. However, every one that I have found glosses over what I think is a beautiful property of the algorithm: its fast integer error function is only an approximation of the actual error function, which seems like it could sacrifice some accuracy in choosing which pixels best approximate the desired circle. But it doesn’t; the motivation for this post is to show that (1) this is an issue worth thinking about, since the algorithm’s error function is not monotonic in actual distance from the circle, but (2) despite this, in practice Bresenham’s circle algorithm is still optimal in the sense of always selecting the next pixel that is indeed closest to the circle.

At least, I think. More on this later.

### The algorithm

Let’s start by describing how the algorithm works, as a means of introducing some useful notation. Fixing a positive integer input radius $r$ (the reader can verify that the algorithm also works for $r=0$), we can focus on rendering only those pixels in the first octant $0 \leq y \leq x$, using the eightfold dihedral symmetries of the circle to plot the remaining pixels.

Starting at $(x,y)=(r,0)$, at each iteration we always move up one pixel, and conditionally move (diagonally) left one pixel, choosing from the two corresponding lattice points $(x-1,y+1)$ or $(x,y+1)$ depending on which is closest to the circle. (We always move up, since in the first octant the slope of the tangent to the circle is always at most $-1$.)

*Figure: An iteration of Bresenham’s circle algorithm. We have just plotted the pixel at $(x,y)$, and need to choose which of $(x-1,y+1)$ or $(x,y+1)$ to plot next, ideally minimizing the actual distance from the circle (shown in red).*

Ideally, we would choose the next pixel that minimizes the actual, exact distance from the circle, $|c(h,k)|$, where

$$c(h,k)=\sqrt{h^2+k^2}-r,$$

as indicated by the red line segments in the above figure. However, to eliminate those unpleasant square roots, Bresenham’s algorithm instead seeks to minimize $|b(h,k)|$, where

$$b(h,k)=h^2+k^2-r^2=c(h,k)\,c^*(h,k), \qquad c^*(h,k)=\sqrt{h^2+k^2}+r.$$

(To foreshadow the key point, note that $b(h,k)$ is not simply the square of $c(h,k)$.) In either case, it is convenient to define the sum of these signed errors (that is, without the absolute value) for the two candidate pixels as

$$e_b(x,y)=b(x-1,y+1)+b(x,y+1),$$
$$e_c(x,y)=c(x-1,y+1)+c(x,y+1),$$

where in the source code above, the `error` variable maintains the value of $e_b(x,y)$: it is initialized to $e_b(r,0)=3-2r$, and at each iteration, depending on the choice of next pixel to render (more on this shortly), it can be updated with simple integer increment, bit shift, and addition operations as shown.
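These constant-time updates can be checked against the definition directly. The following sketch (the function names are mine, not from the article) recomputes $e_b(x,y)$ from scratch at every iteration and compares it with the incrementally maintained error value:

```c
/* b(h,k) = h^2 + k^2 - r^2, the integer error function from the post. */
static long b_err(long h, long k, long r)
{
    return h * h + k * k - r * r;
}

/* e_b(x,y) = b(x-1,y+1) + b(x,y+1), the sum of signed errors for the
 * two candidate next pixels. */
static long e_b(long x, long y, long r)
{
    return b_err(x - 1, y + 1, r) + b_err(x, y + 1, r);
}

/* Run the first octant for the given radius and verify that the
 * incrementally updated error variable equals e_b(x,y) recomputed
 * from its definition at every iteration. Returns 1 on success. */
int updates_match(long radius)
{
    long x = radius;
    long y = 0;
    long error = 3 - 2 * radius; /* e_b(r, 0) */
    while (x >= y)
    {
        if (error != e_b(x, y, radius))
            return 0;
        if (error > 0)
            error -= 4 * (--x);
        error += 4 * (++y) + 2;
    }
    return 1;
}
```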

### Choosing the closest pixel

To decide which pixel to move to at each iteration, note that both the integer-valued $b(h,k)$ and the exact $c(h,k)$ have the property that they are positive for points $(h,k)$ outside the circle, and negative for points inside the circle. Thus, for example, in the extreme (and in fact impossible) case that both of the candidate next points $(x-1,y+1)$ and $(x,y+1)$ were outside the circle, then $e_b(x,y)$ and $e_c(x,y)$ are positive, indicating that we should move diagonally left to $(x-1,y+1)$. Similarly, if both candidate points were inside the circle, then the sum of errors is negative, indicating that we should move up to $(x,y+1)$.

The important question is what happens in between, i.e., in the typical situation shown in the above figure, where the two candidate points straddle the circle, so to speak. Again, ideally we would use the sign of the exact sum of errors $e_c(x,y)$, in the same way as described above, moving diagonally left if $e_c(x,y)>0$, or up if $e_c(x,y)<0$. Does Bresenham’s algorithm, using the sign of the approximated integer-valued $e_b(x,y)$ instead, yield the same behavior?

To see why this is a question worth asking, suppose for example that we want to draw a circle with radius $r=4$, and that for some reason we are asked which of the points $(2,2)$ or $(4,3)$ is closer to the circle, as shown in the following figure.

The reader can verify that $(4,3)$ is closer, with $|c(4,3)|<|c(2,2)|$ and thus $c(2,2)+c(4,3)<0$ … but $|b(4,3)|>|b(2,2)|$ and thus $b(2,2)+b(4,3)>0$. That is, the integer error approximation used in Bresenham’s algorithm is not monotonic in the actual distance of points from the circle.

The interesting good news is that this seeming flaw isn’t one. That is, we are now in a position to make the following conjecture:

Conjecture: At each iteration of Bresenham’s circle algorithm, the sum-of-error functions $e_c(x,y)$ and $e_b(x,y)$ always have the same sign. That is, the algorithm always makes the correct choice of which of the two candidate next pixels is closest to the circle.

Incomplete proof: There are a few key observations that will be useful here. First, at each iteration, $c(x,y+1)>c(x-1,y+1)$ and $c^*(x,y+1)>c^*(x-1,y+1)>0$. We can see these directly from the definitions, noting that $r \geq x>0$. (This is why we constrained the radius to be positive, treating $r=0$ as a separate special case.)

Also, note that the exact sum of errors $e_c(x,y)$ is always irrational, and thus never zero. That is, we should never be indifferent to the choice of next pixel. The approximated $e_b(x,y)$ is never zero, either, as can be seen by inspecting the source code: the value of error is initialized to an odd number, and is always incremented or decremented by even amounts. Thus, it suffices to show that $e_c(x,y)$ and $e_b(x,y)$ are either both positive, or both negative.
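The parity argument can be confirmed mechanically; this sketch (the function name is mine) runs the first octant and checks that the error variable, odd at the start, stays odd after every update:

```c
/* Return 1 if the error variable is odd at every iteration of the
 * first octant for the given radius. It starts at the odd value
 * 3 - 2r and is only ever changed by the even amounts 4(x-1) and
 * 4(y+1) + 2, so it can never reach zero. */
int error_always_odd(long radius)
{
    long x = radius;
    long y = 0;
    long error = 3 - 2 * radius;
    while (x >= y)
    {
        if (error % 2 == 0) /* even would break the parity argument */
            return 0;
        if (error > 0)
            error -= 4 * (--x);
        error += 4 * (++y) + 2;
    }
    return 1;
}
```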

So, there are two cases. First, suppose that $e_c(x,y)>0$. Then $c(x,y+1)>0$, since $2c(x,y+1)>c(x,y+1)+c(x-1,y+1)=e_c(x,y)>0$

Thus,

$$e_b(x,y)=b(x-1,y+1)+b(x,y+1)$$
$$=c(x-1,y+1)\,c^*(x-1,y+1)+c(x,y+1)\,c^*(x,y+1)$$
$$>c(x-1,y+1)\,c^*(x-1,y+1)+c(x,y+1)\,c^*(x-1,y+1)$$
$$=e_c(x,y)\,c^*(x-1,y+1)>0.$$

Thus, when we are supposed to move diagonally left, we do.

The second case, showing that $e_c(x,y)<0$ implies $e_b(x,y)<0$, is why this is only a conjecture and not a theorem. The following is as far as I have been able to proceed:

Similar to the first case, note that $c(x-1,y+1)<0$, since

$$2c(x-1,y+1)<c(x-1,y+1)+c(x,y+1)=e_c(x,y)<0.$$

Consider again the expanded form

$$e_b(x,y)=b(x-1,y+1)+b(x,y+1)$$
$$=c(x-1,y+1)\,c^*(x-1,y+1)+c(x,y+1)\,c^*(x,y+1).$$

If $c(x,y+1) \leq 0$, then we can see $e_b(x,y)<0$ directly. Otherwise, if $c(x,y+1)>0$ … and here is where I get stuck.

### Conclusion

Well, that was disappointing.

My glass is half full, though; I think there are three possible resolutions of this problem, any of which would be an interesting outcome. First, and most likely, I may have simply missed something obvious. Perhaps a reader can help take the above attempt at a proof to the finish line.

Alternatively, maybe the conjecture is actually false, and there is in fact a counterexample, which given the partial proof above, would necessarily be a situation where Bresenham’s algorithm is supposed to move up ($e_c(x,y)<0$), but moves diagonally left instead ($e_b(x,y)>0$). In realistic practice, this isn’t a concern: I have verified numerically that the algorithm works correctly for every radius up to $r=16384$, that is, much larger than any pixel-perfect circle that would fit on the highest resolution screens currently available. But even from a theoretical perspective, if there is a counterexample, it would be interesting that a minimal one must be so surprisingly large.

Or finally, maybe the conjecture is indeed true, but the remainder of the proof of the second “negative” case is actually really hard. There is some intuition behind this possibility as well: in this part of the proof, we are trying to bound a sum of square roots, which is one of my favorite examples of “problems that seem like they have no business being as difficult as they are.” (The Euclidean traveling salesman problem is perhaps the most recognizable decision problem with this wrinkle: it’s often claimed to be NP-complete, but it’s “only” NP-hard; we don’t even know if it’s in NP, since it involves a similar comparison with a sum of square roots.) This potentially affects the integrity of my brute-force numerical verification above as well: it’s technically possible that I missed a counterexample by performing those thousands of sign checks with “only” 50-digit precision.

Edit 2022-03-21: Thanks to commenters Andrey Khalyavin for the initial proof of the contrapositive of the missing half above, which I didn’t fully grasp, and to Linus Hamilton for patiently following up with a more detailed version, linked here, that helped me to see how it works. Their work shows that Bresenham’s algorithm is indeed optimal, always selecting the unique next pixel that is closest to the desired circle.

Reference:

1. Iverson, Brent, Hi-Res Circle Generator, *Nibble Magazine*, 9(1), January 1988, pp. 68-71.

### 13 Responses to Bresenham’s circles are (always?) optimal

1. Andrey Khalyavin says:

Since e_b(x, y) is an integer, you only need to show that e_b(x,y) >= 1 implies e_c(x,y) > 0 to complete the proof. This can be done by two squarings. If we replace y + 1 with y, e_b(x,y) >= 1 is equivalent to x * (x - 1) + y^2 >= r^2. If we rewrite e_c(x,y) > 0 as a sum of two square roots greater than 2 * r and square the inequality, we will have 2 * (x * (x - 1) + y^2) + 1 in the left hand side with the square root of the product multiplied by 2, and we will have 4 * r^2 in the right hand side. So it is enough to prove that the square root of the product is greater than r^2. This can be done by another squaring.

• possiblywrong says:

Thanks for taking a look at this! I might need more explicit details between the steps that you describe, because I am not sure that I follow. Isn’t this begging the question? That is, that last step, showing the lower bound on the square root (technically, I think it suffices to show that it is at least $r^2-1/2$), seems at least as hard as the original bound.

• Linus Hamilton says:

I think Andrey’s proof works, and I wrote it up with a bit more detail here: https://i.stack.imgur.com/iWSTc.png
The insight that e_b(x,y) is always an odd integer is necessary, since I think the conjecture is not always true when you try to draw a circle with a non-integer radius.

2. Wilmington says:

In “Computer Graphics: Principles and Practice” 2nd ed. (Foley, van Dam, Feiner, Hughes), p. 83ff, they take a slightly different approach. They assume that the next pixel will be a choice between two adjacent pixels. Rather than computing the distance of each candidate pixel to the circle, they focus on the midpoint between the two candidate pixels. Then, they say that F(x, y) = x^2 + y^2 - R^2 is positive outside the circle, negative inside, and evaluate the midpoint with it.

But, I guess your question is, how do we know that choosing based on the midpoint is equivalent to actually calculating the distance to the center of each candidate pixel for two ADJACENT pixels?

• possiblywrong says:

Right. This is an intriguing idea: it’s not hard to show that the “midpoint” algorithm is equivalent to the one discussed here, in the sense that both algorithms always produce exactly the same set of pixels. And so it would indeed suffice to show that the midpoint is outside the circle iff the (diagonally) left pixel is closest to it.

3. Wilmington says:

Another approach would be to start with e[b] and factor out $c^*(x, y+1)$.

e[b] = (sqrt(x^2 + (y+1)^2) + r) ( (sqrt((x-1)^2 + (y+1)^2) - r) (f) + sqrt(x^2 + (y+1)^2) - r)
where f = the ratio of $c^*(x-1, y+1)$ to $c^*(x, y+1)$ = (sqrt((x-1)^2 + (y+1)^2) + r) / (sqrt(x^2 + (y+1)^2) + r).

This representation of e[b] looks almost identical to e[c] with the addition of the factor “f”.
The initial factor in e[b] above is always positive and won’t affect whether the expression is positive or negative.
The factor f is always positive, and provided $x > 1$ will be a positive number less than 1.
Since we are considering 1st octant points, x ranges from r down to ceil(r/sqrt(2))-1 (or something like that, not sure of the exact bound). This range won’t include 1 unless r=1, in which case there really aren’t any Bresenham steps.

So, when e[c] is positive, we can see that this factored e[b] will also be positive (as you said).
When e[c] is negative, then we can see that this factored e[b] will be negative unless the amount by which the inner error is diminished, that is (1-f) c(x-1, y+1), exceeds the amount of “slack” on the outer error, that is the difference c(x-1, y+1) - c(x, y+1).

For me, this also leads to the “stuck” state..

4. Mikko says:

Proof sketch: let b1=b(x-1,y+1), b2=b(x, y+1), and define c1, c2, c1* and c2* analogously.
Also denote h1=hypot(x-1,y+1), h2=hypot(x,y+1).

We want to prove b1+b2 > 0 => c1 + c2 > 0. The case 0 < b1 < b2 is easy: it follows from sign(b)==sign(c).
Assume b1 < 0 < b2 and b1 + b2 > 0. Then there are positive integers n and k such that
b1 = -n
b2 = n + 2k - 1.
Subtracting these gives b2 - b1 = 2x-1 = 2n + 2k - 1 => n = x - k, i.e.
b1 = k - x
b2 = k + x - 1

c1 + c2
= b1/c1* + b2/c2*
= (b1 c2* + b2 c1*) / (c1* c2*)
~ b1 c2* + b2 c1*
= (k-x) c2* + (k+x-1) c1*
= x(c1* - c2*) + k(c1* + c2*) - c1*

>= x(c1* - c2*) + (c1* + c2*) - c1* # since k >= 1 and c1* + c2* > 0
= x(c1* - c2*) + c2*
= x(h1 - h2) + h2 + r
= x h1 + (1-x) h2 + r

d/dx (xh1 + (1-x)h2 + r) = h1 + x(x-1)/h1 – h2 + (1-x)x/h2
= h1-h2 + x(x-1)(1/h1 – 1/h2)
= (h2-h1)(-1 + x(x-1)/h1h2)
= 0
<=> (x(x-1))^2 = ((x-1)^2 + (y+1)^2)(x^2 + (y+1)^2)
<=> (y+1)^2((x-1)^2 + x^2 + (y+1)^2) = 0
=> y = -1.

Similarly d/dy = 0 => (y+1)(x(h2-h1) + h1)/h1h2 = 0 => y = -1.

I.e. the global minimum r is attained on the line y=-1, and c1 + c2 is positive everywhere.

5. Wilmington says:

The circle intersects the segment between (x-1, y+1) and (x, y+1) somewhere. Call it x-d, where d is in [0,1].

We can write r^2 = (x-d)^2 + (y+1)^2.

In e[b] replace r^2 with (x-d)^2 + (y+1)^2, simplify, and factor out positive constants. When I do this, I get that e[b] = (2d-1)x - d^2.

In e[c], do a similar replacement to write the expressions under the radical as r^2 + 2x(d-1) + 1 - d^2, and r^2 + 2xd - d^2. Factor out r^2 from under the radicals, leaving 1 + expressions / r^2. Divide by the positive constant r since it doesn’t affect our indicator’s sign, and we have two radicals minus 2.

Now apply Newton’s extension to the binomial theorem. The -2 is offset by two +1’s and we’re left with e[c] = (2d-1)x - d^2 + 1/2 - 1/(8r^2)[((2d-1)x + 1-d^2)^2 - (2dx-d^2)^2] + higher order terms.

At this point we see that e[b] and e[c] differ by 1/2 - 1/(8r^2)[((2d-1)x + 1-d^2)^2 - (2dx-d^2)^2] + higher order terms.

Does this seem correct?
Can anyone see when this would be enough to change the sign of the indicator?
One more question – how do you write match in wordpress?

6. Wilmington says:

I think I botched the algebra some in my previous description. I made another try at it and posted it here: https://imgur.com/a/KtqkxBW

If I did it right the second time, the difference between e[c] and e[b] is d^2 + the higher order binomial expansion terms (multiplied by 2r^2).

7. Wilmington says:

Note that the imgur image [ https://imgur.com/a/KtqkxBW ] has two pages. Scroll down to see second. (Sorry for multiple comments, I don’t think you can edit here.)

• possiblywrong says:

Thanks for your help and work on this– I’ll need to take a closer look to understand this better.

Regarding comment formatting, you can put inline LaTeX into comments using syntax described here.

• Wilmington says:

I’m thinking that the conjecture is false, but finding a counter-example might be ticklish.

Since “Geometry is the art of correct reasoning about incorrectly drawn figures”*, please find an incorrect figure here: https://imgur.com/a/rbGY2cM

r sub i is the radius of the circle that goes through (x-1, y+1).
r sub o is the radius of the circle that goes through (x, y+1).

delta sub i is the distance from the inner radius to the (exact) circle that we are trying to approximate.
delta sub o is the distance from the outer radius to the (exact) circle that we are trying to approximate.

We are considering the situation where the exact error e sub c is negative. That is, the point should go straight up and not diagonally up. In this case, delta sub i is larger than delta sub o.
Let delta sub i be larger than delta sub o by a positive constant epsilon.

After rotating the line to (x, y+1) onto (x-1, y+1), the diagram shows constituent pieces of e sub b(x-1, y+1) and e sub b(x, y+1). e sub b (x-1, y+1) is the piece outlined in orange. It is the radius of the exact circle r squared, less the radius of the circle passing through (x-1, y+1), squared.

Similarly, the e sub b (x, y+1) is the part in yellow and is the difference of the outer radius squared and the exact radius squared.

You could do this algebraically, but the diagram shows components in the orange shape that are equal to components in the yellow shape with double lines. (I moved one set of double lines on delta squared to pick the more symmetric component, but my pen doesn’t erase.)

After offsetting the matching areas, the outer has a net of 2 (delta sub o)^2 and the inner has a net of 2 (r sub i) epsilon + epsilon^2.

If the net area in the outer is greater than the net area in the inner, then Bresenham’s algorithm will say that the error is positive and therefore the point should go to (x-1, y+1). This would be incorrect for our scenario where we postulated that the inner error is larger than the outer error by the positive amount epsilon.

That is when is 2 * (delta sub o)^2 > 2 * (r sub i) * epsilon + epsilon^2 ?

For this to happen, I think first that the inner and outer error must be very close. It should be almost 1/2 of the difference sqrt(x^2 + (y+1)^2) - sqrt((x-1)^2 + (y+1)^2). But, for a counter-example the exact circle must be slightly closer to the inner circle through (x-1, y+1).

My feeling is that the conjecture is false, but like you said, it may take a large radius to find it.

*George Polya

8. possiblywrong says:

@Linus, thanks for the helpful and very clear write-up! I have edited the post to directly link to your proof, crediting both you and Andrey, who as you point out had the right idea, but thanks for taking the time to make it clearer and simpler for me to digest!
