This is my first experiment with blogging. I say “experiment” because I am curious to see how the subject matter – and intended audience – will evolve. The plan, anyway, is to discuss what I think are interesting ideas, presented in a way that will hopefully add clarity and detail, at least in my own mind. We will see how it goes.
So what sort of ideas do I think are interesting? I expect that no matter what the specific topic, science will be in there somewhere, with a likely focus on mathematics and computing, and just as likely digressions on math and science education. But rather than spend too much time trying to set the stage, let’s start with a specific idea that occurred to me while reading Torkel Franzén’s “Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse.”
(Don’t worry, Gödel’s incompleteness theorems aren’t really the point of this discussion. But artificial intelligence is – Franzén’s book deals with many different “popular” misapplications of the theorems, one of which is an argument that the human mind cannot be simply a “formal system” subject only to mathematical or physical laws, but must involve something more, possibly supernatural or divine, depending on who is arguing. If you are interested, Hofstadter’s “Gödel, Escher, Bach: An Eternal Golden Braid” and Penrose’s “The Emperor’s New Mind” are both very interesting and entertaining books about this topic, with mostly opposing viewpoints.)
Anyway, where was I? Oh yeah… there is an interesting quote, attributed to the computer scientist Larry Tesler, saying that “Artificial intelligence is whatever hasn’t been done yet.” That is, we suppose that an “intelligent” computer or machine or robot must be able to perform some task that currently only humans can do effectively. Just a few decades ago, playing board games was among the first of those tasks that AI researchers tackled… and today computers play some board games very well – chess being the most popular example – and even play some games perfectly, checkers being the most recent “solved” game that I know of.
But now that computers are able to play so many board games so well, playing those games is no longer regarded as an activity that requires or demonstrates “intelligence.” The basic approaches and algorithms used in game-playing are now relatively well understood, and frequently involve simple brute-force evaluation of large numbers of possible move sequences – an approach that differs markedly from how human players typically think about the game.
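The brute-force flavor of these algorithms can be seen even in a toy example. The following Python sketch is my own illustration, not any particular chess or checkers engine: it exhaustively searches every possible move sequence of a simple subtraction game, in which players alternately remove 1–3 stones from a pile and whoever takes the last stone wins.

```python
# Brute-force game-tree search for a toy subtraction game:
# players alternately remove 1-3 stones; taking the last stone wins.

def best_move(stones):
    """Exhaustively search every move sequence; return a winning
    move for the player to act, or None if every move loses."""
    for take in (1, 2, 3):
        if take == stones:
            return take   # taking the last stone wins outright
        if take < stones and not can_win(stones - take):
            return take   # leave the opponent a losing position
    return None

def can_win(stones):
    """True if the player to move can force a win from this pile."""
    return best_move(stones) is not None
```

Run on a pile of 10 stones, the search discovers that removing 2 leaves the opponent a position from which every continuation loses. This is exactly the kind of exhaustive lookahead that, scaled up enormously and combined with clever pruning, lets computers play (and even solve) board games without thinking about them the way a person does.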
Natural language processing is another example. I remember as a kid playing with the program Eliza, which I found in the book “More BASIC Computer Games” (anyone remember this?). The program simulated a mostly incompetent psychiatrist, trading questions and responses with the user’s typed input. It used a very crude English language processing algorithm, typically extracting whole chunks of the user’s input and regurgitating them back in the form of a question. Crude… but fascinating. Early text adventure games used similar ideas, initially allowing simple verb/object commands such as “Go North” or “Kill dwarf,” but eventually “understanding” more complex sentences as input.
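To give a sense of just how crude that trick can be and still feel uncanny, here is a tiny Eliza-flavored sketch in Python. The patterns and canned responses are my own hypothetical examples, not the original BASIC program’s; the technique is the same, though: match a template, flip the pronouns, and hand the user’s own words back as a question.

```python
import re

# Pronoun swaps applied to the chunk of text we echo back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment):
    """Flip first-person words to second-person, word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    """Match a couple of crude templates and regurgitate the input."""
    m = re.match(r"i feel (.*)", text, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", text, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Please, go on."   # fallback when nothing matches
```

So “I feel sad about my job” becomes “Why do you feel sad about your job?” – no understanding anywhere, just pattern matching and substitution, yet the illusion of a conversation briefly holds.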
Jaded young video gamers of today would by no means consider these relics to exhibit any sort of intelligence… but that is exactly the point. As soon as the state of the art expands its frontier of capability, those newly-learned tasks are no longer viewed as hallmarks of artificial intelligence, but simply the product of new or more complex – but still pre-programmed – behavior.
This point of view is interesting, since it seems to implicitly assume that we humans have something inherently “more” involved in our thought and behavior than can be simulated by a computer. As soon as we are able to understand, and thus program, a particular complex activity, that activity is no longer sufficient to capture whatever is the essence of the human mind.
Proponents of the creationist “intelligent design” theory share a similar viewpoint. (Bear with me here – this comparison is the motivation of this entire post.) Intelligent design is the idea that life as we currently experience and observe it could not have originated spontaneously and evolved from there, but instead must have been designed and placed here by an intelligent creator. Specifically, the argument of “irreducible complexity” is that there are some aspects of some living organisms that are so complex, yet so “functionally compact” (my own inadequate phrase), that they could not have evolved in stages over time, since the intermediate stages prior to complete functionality would confer no advantage and thus could not have been favored by natural selection. The eye, with its many components all of which “must” work together for any useful behavior, and the bacterial flagellum are a couple of commonly used examples.
This argument of irreducible complexity has been referred to, usually without much respect, as the “argument from personal incredulity.” That is, intelligent design is the idea that “I can’t understand it, so it must be divine.” It occurred to me that this is essentially the (contrapositive of the) converse of the viewpoint giving rise to the AI effect, namely that humans must have fundamentally (divinely?) more intelligence than computers can possibly have: “I can understand it, so it must not be divine.”