More voting: not unanimous, just involved

Americans are so used to “one person, one vote” that they often imagine this is the only sensible way to vote.  It’s not.  (In fact, we’ll see that it’s about the least sensible way to vote!) – W. Poundstone, “Gaming the Vote: Why Elections Aren’t Fair (and What We Can Do About It)”

Every four years, the United States presidential electoral process gets renewed scrutiny and criticism, usually focused on the arcane goofiness of the electoral college: the non-uniform voting power of people in different states, and the possible (and in several cases, actual) difference between the outcomes of the popular and electoral votes, Gore vs. Bush in 2000 being the most recent example.

I think a nationwide popular vote for president makes sense, or at least more sense than the current system.  However, it is likely that we are stuck with what we have.  The presidential electoral process is specified in the Constitution; an amendment would require ratification (by legislature or convention) by at least three-fourths, or 38, of the states.  The electoral college gives voters in the smaller states significantly more voting power than they would have in a nationwide popular vote, and there are quite a few more than the 13 such “small” states needed to block an amendment.  Wyoming is the most extreme example, with over three times the per-capita electoral votes of California.
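To see where the “over three times” figure comes from, here is a back-of-the-envelope check.  (The population figures below are my own rough 2000-census numbers, not exact; the electoral vote counts of 3 and 55 are from the current apportionment.)

```python
# Ballpark check of per-capita electoral voting power:
wy_pop, wy_ev = 494_000, 3           # Wyoming: ~494k people, 3 electoral votes
ca_pop, ca_ev = 33_872_000, 55       # California: ~33.9M people, 55 electoral votes

ratio = (wy_ev / wy_pop) / (ca_ev / ca_pop)
print(round(ratio, 1))  # ~3.7: over three times California's per-capita power
```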

But with either the electoral vote or a popular vote, problems (and solutions) still remain.  The quote above refers to the problems with the simple plurality voting system in use for almost every single-seat election in the country.  That is, a plurality vote assigns one point per ballot to the corresponding voter’s top-ranked candidate; the candidate with the most points wins the election.

Simple and obvious, right?  Indeed, as Poundstone suggests, people often do not even consider that other voting procedures are possible and perhaps better, particularly when there are three or more candidates in the election.  For example, the anti-plurality vote (discussed in an earlier post) awards one point per ballot to each voter’s bottom-ranked candidate, with the candidate receiving the fewest points winning the election.  With just two candidates, these two procedures are equivalent; with three or more candidates, however, these and other procedures can yield different outcomes.  (Note that I am not suggesting the anti-plurality vote for the presidential election; this is just an example of a different procedure that can yield different election outcomes.)
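To make the difference concrete, here is a minimal sketch, with a made-up nine-voter profile, in which plurality and anti-plurality elect different candidates:

```python
from collections import Counter

def plurality_winner(ballots):
    """One point per ballot to the top-ranked candidate; most points wins."""
    tally = Counter(ballot[0] for ballot in ballots)
    return max(tally, key=tally.get)

def antiplurality_winner(ballots):
    """One point per ballot to the bottom-ranked candidate; fewest points wins."""
    candidates = set(ballots[0])
    tally = Counter(ballot[-1] for ballot in ballots)
    return min(candidates, key=lambda c: tally[c])  # never ranked last = 0 points

# Hypothetical 9-voter profile over candidates A, B, C:
ballots = ([list("ABC")] * 4   # 4 voters rank A > B > C
         + [list("BCA")] * 3   # 3 voters rank B > C > A
         + [list("CBA")] * 2)  # 2 voters rank C > B > A

print(plurality_winner(ballots))      # A (4 first-place votes)
print(antiplurality_winner(ballots))  # B (never ranked last)
```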

So which procedure is best?  A common mathematical approach to finding an optimal “something” is to specify a few “obvious” desirable properties of that something, then figure out what thing or things have those properties.  Economist Kenneth Arrow took that approach, resulting in his now-famous “Impossibility Theorem.”

The theorem is rather simple to state: consider an election involving three (or more) candidates, call them A, B, and C.  Suppose we want to find a voting procedure whose inputs are each voter’s strict transitive ranking of the candidates (e.g., A > B > C), and whose output is a ranking representing the election outcome.  There are several properties that such a procedure should (presumably) have:

  • (U) Unanimity: For any pair of candidates x and y, if all voters prefer x over y, then x > y in the outcome.
  • (IIA) Independence of irrelevant alternatives: For any pair of candidates x and y, the ranking of x and y in the outcome should depend only on the voters’ rankings of x and y (i.e., not on the ranking of any other candidate).
  • (D) Non-dictatorship: The election outcome should not be a function of a single distinguished voter’s preferences.

Arrow’s Theorem states that there is no voting procedure with these three simple properties.  Furthermore, any voting procedure with properties (U) and (IIA) is a dictatorship!
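Property (IIA) is the one that trips up most familiar procedures.  As a concrete illustration (this example is mine, not from Poundstone or Arrow), here is a sketch showing the Borda count, a positional method, violating (IIA): moving only candidate C on some ballots flips the relative outcome of A and B, even though no voter changed their A-vs-B preference.

```python
def borda_ranking(ballots):
    """Borda count: with m candidates, a ballot gives m-1 points to its top
    choice, m-2 to the next, ..., 0 to its bottom choice."""
    scores = {}
    m = len(ballots[0])
    for ballot in ballots:
        for pts, cand in zip(range(m - 1, -1, -1), ballot):
            scores[cand] = scores.get(cand, 0) + pts
    return sorted(scores, key=scores.get, reverse=True), scores

# In both profiles, 3 voters prefer A over B and 2 prefer B over A;
# only C's position on the last two ballots differs.
profile1 = [list("ABC")] * 3 + [list("BCA")] * 2
profile2 = [list("ABC")] * 3 + [list("BAC")] * 2

print(borda_ranking(profile1))  # B beats A (B=7, A=6, C=2)
print(borda_ranking(profile2))  # A beats B (A=8, B=7, C=0)
```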

It turns out that the situation is not quite as grim as it seems.  My point in this post is to refer you to an interesting paper by Donald Saari, “Connecting and Resolving Sen’s and Arrow’s Theorems.”  I think the paper does a good job of explaining why Arrow’s Theorem is not surprising, and how a minor modification of the property (IIA) yields more optimistic results.

Also mentioned in the paper is a somewhat technical generalization of the theorem that I had not seen before (and that motivated this post and its title).  The unanimity condition is stronger than it needs to be; Saari describes a weaker property that he calls “involvement” whose substitution for (U) yields essentially the same result.  The involvement condition (I) requires that there be at least two distinct pairs of candidates {x, y} for which some set of ballots (inputs) yields the outcome x > y, and another set of ballots yields y > x.

The problem: I can see how unanimity is a special case of involvement (exercise for the reader: show that this is in fact the case)… but I have not quite wrapped my head around the paper enough to see how this weaker property still excludes all possible voting procedures.
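For what it’s worth, here is one way to see the easy direction of that exercise (a sketch in my own words):

```latex
% Sketch: any procedure satisfying (U) also satisfies (I).
With $n \geq 3$ candidates there are $\binom{n}{2} \geq 3$ pairs of
candidates; pick any two of them, say $\{x,y\}$ and $\{y,z\}$.  For the
pair $\{x,y\}$, unanimity says the profile in which \emph{every} voter
ranks $x > y$ must yield the outcome $x > y$, while the profile in which
every voter ranks $y > x$ must yield $y > x$.  The same argument applies
to $\{y,z\}$.  So at least two pairs admit ballot sets realizing either
relative order, which is exactly condition (I); hence (U) implies (I),
and (I) is the weaker hypothesis.
```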

The Uncanny Valley

Remember those Charles Schwab commercials from the last year or two?  They used an animation technique called rotoscoping, in which artists trace each individual frame of a live-action scene.  I recall finding those commercials annoying, bordering on downright disturbing.

It turns out that this reaction is pretty common, so much so that the phenomenon was given a name that has stuck for the last 40 years: the “uncanny valley,” a phrase coined by roboticist Masahiro Mori.  I recall first reading about this in an article from the December 2008 issue of Scientific American.  More recently, I saw it just last week in The Straight Dope.  (If you haven’t read the Straight Dope, I recommend it; Cecil Adams’ columns not only have some interesting information, but they are usually a pretty fun and amusing read.)

The uncanny valley is simple to describe.  Consider plotting a graph of humans’ emotional connection with a robot versus the robot’s similarity to a human form.  (Wikipedia has a detailed example of such a graph.)  Such a graph is generally increasing, with the exception of a marked “valley” or dip just prior to the point of indistinguishability between a robot and a human.  Put more simply, as robots, or animated images, or whatever, become more and more life-like, we appreciate that similarity more and more… but when a robot or image is so life-like that it looks “almost but not quite human,” we find it off-putting or even disturbing.

Beyond the Charles Schwab commercials mentioned earlier, I think movie-makers might also find the uncanny valley an important consideration.  I remember Final Fantasy: The Spirits Within being an interesting movie, at least from a technological perspective… but it also seemed a little spooky.  The more recent Beowulf had a similar effect, and not just because Angelina Jolie had a tail.  The interesting question, I think, is why?  What causes this reaction in us?  And how close to “human” must a robot be to climb out of the uncanny valley?

Gerrymandering: you know it when you see it?

I find social choice problems very interesting.  Voting, fair division, and today’s subject, gerrymandering, are all good examples.  I think they are a source of a lot of interesting mathematics, in particular mathematics that is very accessible to students at many levels.  Oh yeah, and I suppose politicians and we as voters could also stand to inject some logic into our views on the process of government.

Gerrymandering seems to be rather like pornography: it is difficult to define, but you usually know it when you see it.  North Carolina’s 12th district is one of the more glaring examples, both by the eyeball norm and by several other more objectively measurable criteria, which we’ll get to shortly.  Two questions arise: first, can we define it?  And second, can we prevent it?

An article in the most recent College Mathematics Journal (Hodge, Marshall, and Patterson, Gerrymandering and Convexity) provides some interesting analysis of the first question.  The authors use convexity of a district as a measure of “extent” of gerrymandering.  Recall that a set S is convex if, for any pair of points x and y in S, the line segment joining x and y is contained entirely within S.  Rather than simply determining whether a district is convex or not, they consider the probability that the segment joining a randomly selected pair of points is contained within the district.  This provides a metric with a range of values corresponding, presumably, to a range of extents of gerrymandering.
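As a rough illustration of the idea (my own sketch, using the Shapely geometry library, and not the authors’ code), the probability can be estimated by Monte Carlo:

```python
import random
from shapely.geometry import LineString, Point, Polygon

def random_point_in(poly, rng):
    """Rejection-sample a uniform point from the polygon's bounding box."""
    minx, miny, maxx, maxy = poly.bounds
    while True:
        p = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if poly.contains(p):
            return p

def convexity_coefficient(poly, trials=10000, seed=0):
    """Estimate the probability that the segment joining two randomly
    selected points of the district lies entirely within the district."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b = random_point_in(poly, rng), random_point_in(poly, rng)
        if poly.contains(LineString([a, b])):
            hits += 1
    return hits / trials

# A convex square scores ~1; a gerrymander-ish "U" shape scores much lower.
square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
u_shape = Polygon([(0, 0), (3, 0), (3, 3), (2, 3),
                   (2, 1), (1, 1), (1, 3), (0, 3)])
print(convexity_coefficient(square))   # ~1.0
print(convexity_coefficient(u_shape))  # well below 1
```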

There are some practical problems to consider, with interesting and elegant solutions.  (What if the state itself is not “very convex”?  What about population distribution?  How hard is it to compute these probabilities in the first place?)  From a pedagogical standpoint, this seems like a great practical experiment, given the wealth of data that is publicly available, including state and district boundaries, as well as population distributions, all from the U.S. Census.

Now for the second question: can we prevent gerrymandering?  It seems to me there are two possible approaches here.  One is to get the humans out of the loop.  That is, provide a procedure, an algorithm for constructing district boundaries, considering state boundaries and population distributions, but not necessarily political demographics.  One interesting proposal is the so-called “Shortest Split-Line Algorithm.”  The Center for Range Voting has a good site with examples of the results.

The idea is pretty simple: suppose you want to divide up a state into 15 districts.  Split the state into two parts, with populations in the ratio 7:8, using the shortest possible line segment through the state’s area.  These two parts will make up 7 and 8 districts, respectively; then perform the same splitting process recursively on each part.
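Here is a simplified sketch of the recursion (my own, and deliberately not the real algorithm: the actual procedure searches all line orientations for the shortest splitting segment, while this sketch just cuts perpendicular to the longer coordinate axis):

```python
import random

def splitline_districts(people, n):
    """Simplified splitline recursion on a list of (x, y) population points.
    A region owed n districts is cut into pieces owed floor(n/2) and
    ceil(n/2) districts, with population split in that same ratio."""
    if n == 1:
        return [people]
    a, b = n // 2, (n + 1) // 2
    xs = [p[0] for p in people]
    ys = [p[1] for p in people]
    axis = 0 if max(xs) - min(xs) >= max(ys) - min(ys) else 1
    ordered = sorted(people, key=lambda p: p[axis])
    cut = len(ordered) * a // n  # population split in the ratio a : b
    return (splitline_districts(ordered[:cut], a)
            + splitline_districts(ordered[cut:], b))

# 15 districts: the first cut separates the population in the ratio 7 : 8.
random.seed(1)
people = [(random.random(), random.random()) for _ in range(15000)]
districts = splitline_districts(people, 15)
print(sorted(len(d) for d in districts))  # roughly 1000 people per district
```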

Sounds good, right?  The resulting districts are “very convex.”  The problem is that convexity is not the whole story, being neither necessary nor sufficient for a district to be free from gerrymandering.  Another paper (Humphreys, Can Compactness Constrain the Gerrymander?) provides examples showing just how out of whack things can get even when all districts are convex.

So, is there any way out of this mess?  To leave the clean mathematics behind and jump into the political swamp, it is not clear to me how necessary districting is at all anymore.  We have cleared a lot of technological and logistical hurdles in the last couple of hundred years, to the point that having a designated representative who is geographically close to a subset of residents is no longer a requirement for efficiently communicating with, or electing, our representatives.  Why not use statewide approval voting as a means of electing representatives?

Carcassonne update

Recall the post last month about my computer version of the board game Carcassonne?  During the last couple of weeks I added several new features not available in the board game, including the AI opponent that was my original motivation for the project.  Both the implementation of this opponent and its resulting “personality” have proved to be rather interesting; it plays much differently than I do, and at times seems to make critical errors… but I am only 2-2 against it so far, so it must be doing something right.

You can download the game at the usual location.  It is compiled for Windows, but it should work on any platform supported by FreeGLUT/OpenGL.  Let me know if you have any problems (or success) getting it to work on another platform.

Note that for copyright reasons discussed in last month’s post, I called the game Aude, after the department in France where the city of Carcassonne is located.  I also created my own tile artwork, which was pretty simple to do in Mathematica, although I admit I like the original tiles better.  I cannot distribute the original tiles myself, but if you want to use them, they are available online from the American publisher, and it is a relatively simple matter to turn them into the 128×128-pixel, 24-bit uncompressed bitmaps required by the game.  (You can add expansion tiles pretty easily, too.  Let me know if there is interest in doing this.)
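For example, with the Pillow imaging library the conversion might look like this (the file names here are hypothetical):

```python
from PIL import Image

src = Image.open("original_tile.png")         # downloaded tile artwork
tile = src.convert("RGB").resize((128, 128))  # 24-bit color, 128x128 pixels
tile.save("tile.bmp")                         # Pillow writes uncompressed BMP
```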

This project was a lot of fun, and the resulting game has been fun to play as well.  Even without the AI opponent, there are several aspects of the 3D board interface that I think make the game more interesting.  One is the “projected score” display, indicating not just the points for completed features as in the board game, but also the total points including farms and incomplete features.  My wife and I have found that this information influences mid- to end-game strategy, reducing the perceived element of chance in the game.

Another feature that I added but generally do not use (it is turned off by default) is a display of the distribution of individual tile types remaining in the deck.  This is interesting information, but it feels a little too much like “counting cards” to me.

Finally, you can undo a move at any time.  This also must be used with some discretion, but after a few frustrating missteps with the laptop touchpad in the middle of a good game, being able to undo a mistakenly placed tile is pretty handy.

Support for undoing moves was obviously critical to the tree search used by the AI opponent.  Carcassonne is an interesting game from an artificial intelligence perspective, for a couple of reasons.  First, it is complex; that is, its branching factor, or typical number of moves available to a player at any time, is pretty large, reaching well over 100 or even 200 as a game progresses, compared with 30-35 for chess.  Also, it involves the chance element of drawing tiles from a shuffled deck.  This doubly complicates matters: not only does it effectively multiply the branching factor by about 24, but it also makes standard tree pruning techniques like alpha-beta cutoffs more challenging.
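For the curious, the general technique for searching game trees with chance nodes is expectiminimax.  Here is a minimal sketch of the idea; the state API used below is hypothetical, and this is not necessarily how the search in Aude is implemented:

```python
def expectiminimax(state, depth, maximizing):
    """Minimal sketch of search with chance nodes.  A turn is a chance
    node (the tile draw), averaged over the distinct tile types still
    in the deck, followed by a decision node (the tile placement)."""
    if depth == 0 or state.game_over():
        return state.evaluate()
    total = value = 0
    for tile, count in state.remaining_tiles():  # (tile type, copies left)
        choose = max if maximizing else min
        best = choose(
            (expectiminimax(state.apply(move), depth - 1, not maximizing)
             for move in state.legal_moves(tile)),
            default=state.evaluate())  # drawn tile has no legal placement
        value += count * best  # weight each draw by how many copies remain
        total += count
    return value / total
```

The averaging over draws is exactly what makes alpha-beta cutoffs harder: a branch cannot be pruned based on a single bad child when its value is a weighted average over all possible tiles.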

There is certainly much experimentation still to be done.  The board evaluation function has a lot of room for improvement, even if just to tweak the weights applied to the components that are already computed.  But first I plan to just enjoy playing the game a bit, to see how well or poorly it really plays.