What is (-1&3)?

This is just nostalgic amusement.  I recently encountered the following while poking around in some code that I had written a disturbingly long time ago:

switch (-1&3) {
    case 1: ...
    case 2: ...
    case 3: ...
...
}

What does this code do?  This is interesting because the switch expression is a constant that could be evaluated at compile time (indeed, it could just as well have been implemented with a series of #if/#elif preprocessor directives instead of a switch-case statement).
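
For a concrete sense of what that alternative would look like, here is a minimal sketch of the preprocessor version; the branch bodies are placeholders, not the original code:

#if (-1 & 3) == 1
    /* ... */
#elif (-1 & 3) == 2
    /* ... */
#elif (-1 & 3) == 3
    /* ... */
#endif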

As usual, it seems more fun to present this as a puzzle, rather than just point and say, “This is what I did.”  For context, or possibly as a hint, this code was part of a task involving parsing and analyzing digital terrain elevation data (DTED), where it makes at least some sense.


2 Responses to What is (-1&3)?

  1. I suspect that your example has been carefully chosen to demonstrate this point: because this is a bitwise operation on signed integers, the behavior of this code depends entirely on the integer representation of the host architecture. With optimization enabled, the compiler will easily eliminate the branch, but we can’t know for sure which branch is selected unless we’re told more about the host.

    On a two’s complement architecture, case 3 is always evaluated. The -1 is represented as all bits set (111…111), so the expression evaluates to 3.

    On a one’s complement architecture, case 2 is always evaluated. The -1 is represented as all bits high except the lowest bit (111…110), so the expression evaluates to 2.

    On a signed magnitude architecture, case 1 is always evaluated. The -1 is represented as the lowest and highest bits set (100…001), so the expression evaluates to 1.

    Since you mentioned preprocessor directives, was this code used to detect the host’s signed integer representation at compile time? Digging a bit, I see that DTED uses signed magnitude to encode integers (crazy!), so I’m guessing each branch had architecture-specific code for decoding DTED integers.

    • Right. In hindsight, this was more complicated than it needed to be, since no matter what the host representation is, conversion from a “raw” signed magnitude value to a host integer is achieved by: if (x < 0) x = -(x & 0x7fff);
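
      For what it’s worth, here is a minimal sketch of that conversion written against the raw bytes instead, assuming the usual DTED layout of 16-bit big-endian sign-magnitude elevation samples (the function name is illustrative, not from the original code). Working on the bytes as an unsigned value sidesteps the host-representation question entirely:

      static int dted_elevation(const unsigned char *p)
      {
          unsigned int raw = ((unsigned int)p[0] << 8) | p[1]; /* big-endian 16-bit word */
          int magnitude = (int)(raw & 0x7fffu);                /* low 15 bits are the magnitude */
          return (raw & 0x8000u) ? -magnitude : magnitude;     /* high bit is the sign */
      }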
