Science and Faith: Discovery or Invention?

(If it is not already apparent from the short history of posts here, I expect to be content to alternate with little gear-shifting between probing philosophical questions and recreational mathematics.  I hope your reading mood is fickle.)

Are science and faith compatible?  Can a scientist pursue truth, be comfortable with doubt, and at the same time believe in God?  I think these are interesting, important, and personal questions.  Because they are personal, I don’t think my answers to them should be particularly important to you.  But hopefully the discussion will at least be interesting.

Before discussing these questions directly, though, I want to take what may at first seem like a rather abrupt detour.  Please be patient: this detour is at least mildly interesting in its own right, but does also have direct bearing on the main question.

Is mathematics discovered or invented?  (Remember, be patient.)  That is, when a mathematician describes some new theorem, or new algorithm, or new connection between previously unrelated areas of mathematics, has he or she invented something new “out of thin air,” or discovered something that was “there” all along and simply had yet to be found?

I tend toward the Platonist idea of discovery.  This seems particularly apparent when considering the many historical examples of two or more people, often greatly separated geographically, arriving nearly simultaneously at the same or similar results.

(I focus on mathematics here only briefly, mainly because I am a mathematician, and I think in that particular field the question is actually a bit more interesting, since mathematical truth is frequently so abstract.  But I think the discussion carries over at least as well to other scientific fields, particularly physics.)

To provide a visual analogy to this idea of discovery of pre-existing truths and connections between them, imagine that our world, be it mathematical, physical, or whatever, is one large, very dark room.  In that room is a great engine, consisting of countless parts, gears, rods, etc., all of which move together with humming smoothness.  Each part of the engine corresponds to some mathematical truth or physical law governing how our universe works, and those parts are connected and interact with each other in fascinating ways.

It is a wonderful machine… but the room is completely dark, and we can’t see how it works.  However, each of us is equipped with a flashlight.  Most of those flashlights, mine included, are relatively dim, and a very few others are extremely bright.  Sometimes we know where to shine our lights from what others have learned, and other times we simply get lucky and look in the right place.

(I do not recall when or how this particular description of this idea occurred to me.  But I certainly see in it at least hints of Plato’s Allegory of the Cave, and Newton’s “smoother pebbles” and “shinier shells” on the shore of the “undiscovered great ocean of truth,” images that I remember being fascinated with when I first read them.)

So, to work my way back to my main point… my motivation for this whole discussion is to relate the “exploring a dark room” analogy to the issue of compatibility between science and faith.  As I see it, science is our ongoing attempt to understand how that great engine works.  We currently have a very limited view: some parts of the engine seem relatively easy to see; about other parts we have pretty good ideas of how they work and how they are connected… and still other parts are in near total darkness.

To me, faith is an expression of belief about that part of the world that we cannot yet see or understand clearly.  Faith is to science what conjecture is to theorems.  More precisely, faith deals with those ideas about which we cannot yet make useful testable predictions.  In this respect, I see science and faith as perfectly compatible… but somewhat vacuously so, since their domains are mutually exclusive.  And as long as those domains remain disjoint, I think we are free to believe whatever we like.

But every once in a while, someone shines a light so bright that we are able to significantly expand the frontier between what is lit and what is dark.  Or perhaps someone illuminates an area that we thought we understood pretty well, but by viewing it from a direction not previously considered, we see it more clearly for what it is.  Historical examples abound, from Ptolemy to Galileo to Newton to Einstein.

This latter situation is critical, since to me it lies at the heart of a scientific view of the world.  We can today make statements of varying degrees of confidence about how the world works… but about none of those statements can we be 100% certain.  We must always qualify our understanding as being possibly wrong (!), and be prepared to revise that understanding should someone shine a brighter light from a more illuminating direction.

Pente from the Apple // to Today

This is in part a continuation of last week’s discussion about artificial intelligence.  But mostly it is an excuse to wax nostalgic about some of my early experiences with computers.

I recently made some final tweaks to a program I wrote to play the board game Pente.  You can download it here.  Instructions, source code, and executables for Windows are included, but it should compile on any platform supported by FreeGLUT/OpenGL.  I really enjoyed writing it, and I hope you may get some enjoyment out of playing it.

This project was motivated by several factors.  The idea most recently recurred to me during a classroom discussion about computer programming.  We had been exploring simple graphics, drawing grids representing various game boards as an example.  That led to a discussion of artificial intelligence: not just playing a board game on the computer, but having the computer act as one or even both players.  In response to this discussion, I put together a simple example of a Java applet that would play Tic Tac Toe against a human opponent.  This involved implementing some very generic game-playing AI framework, including move generation and ordering, board evaluation, negamax search with alpha-beta pruning, etc.
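The core of such a framework is compact enough to sketch here.  The following is a minimal illustration of fixed-depth negamax with alpha-beta pruning, not the actual applet code; the game hooks and the toy "take 1 or 2 stones" game are invented for the example:

```python
import math

def negamax(state, depth, alpha, beta, moves, evaluate, apply_move):
    """Fixed-depth negamax search with alpha-beta pruning.  Scores are
    always from the perspective of the side to move."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    best = -math.inf
    for m in legal:                           # move ordering would go here
        score = -negamax(apply_move(state, m), depth - 1,
                         -beta, -alpha, moves, evaluate, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                     # cutoff: opponent won't allow this line
            break
    return best

# Toy game to exercise the search: a pile of n stones, players alternate
# removing 1 or 2 stones, and whoever takes the last stone wins.
moves = lambda n: [m for m in (1, 2) if m <= n]
apply_move = lambda n, m: n - m
evaluate = lambda n: -1 if n == 0 else 0      # side to move has already lost at n == 0

print(negamax(7, 10, -math.inf, math.inf, moves, evaluate, apply_move))  # prints 1 (a win)
```

The same search function drives Tic Tac Toe, Pente, or any other two-player game; only the three game-specific hooks change.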

Meanwhile, my wife and I enjoy playing board games, and I had recently introduced her to Pente.  I had the classic red roll-up tube version of the game when I was a kid, and I loved its combination of simplicity of rules yet complexity of play and strategy.  Playing the game again after so many years, I thought it would be fun to try to implement a computer opponent that could play reasonably well– at least well enough to beat me, which shouldn’t be difficult, since I am definitely a novice player.

Finally, I wasn’t starting completely from scratch: this was not a new idea, but one that had been brewing for nearly 25 years, ever since I read the February 1986 issue of Nibble magazine (“The Reference for Apple Computing”).  There was an article by James R. Geschwender about his program that he called Quintic, which allowed any combination of two human or computer players to play Pente.  But the computer opponent was particularly intriguing in two respects.  First, it did not involve any tree search at all, but was essentially just a single-ply lookahead, evaluating all possible moves according to a weighted sum of table lookup values based on the pattern of empty/black/white stones in a line surrounding each possible move.  The result was an AI opponent that didn’t play terribly well, but was very fast, particularly on the 1 MHz Apple //.
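As best I can reconstruct it, the scheme looked something like the sketch below.  The pattern strings and weights here are invented for illustration and are certainly not Geschwender’s actual tables; the point is only that move selection reduces to table lookups along each line through a candidate point:

```python
# Hypothetical weights for the pattern of the two points flanking a
# candidate move along one line ('X' = computer, 'O' = opponent, '.' = empty).
PATTERN_VALUE = {"..": 1, "X.": 2, ".X": 2, "XX": 8,
                 "O.": 3, ".O": 3, "OO": 12}

def best_move(board, size=19):
    """Single-ply lookahead: score every empty point by summing table
    values for the stone patterns on the 4 lines through it, and play
    the highest-scoring point.  No tree search at all."""
    def cell(r, c):
        return board.get((r, c), ".") if 0 <= r < size and 0 <= c < size else "."
    best, best_score = None, -1
    for r in range(size):
        for c in range(size):
            if (r, c) in board:               # occupied; not a legal move
                continue
            score = sum(PATTERN_VALUE.get(cell(r - dr, c - dc) + cell(r + dr, c + dc), 0)
                        for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)))
            if score > best_score:
                best, best_score = (r, c), score
    return best

# With two adjacent computer stones, the top scorer touches both of them.
print(best_move({(5, 5): "X", (5, 6): "X"}))  # prints (4, 5)
```

Since each move costs only a handful of table lookups, it is easy to see how this ran quickly even on a 1 MHz machine.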

Second– and this was the feature that really fascinated me as a kid– the lookup table that defined the move evaluation function was not fixed.  There was an accompanying “coaching” program that let you tweak the values corresponding to different patterns on the board.  But even more interesting was the ability of a computer player to “learn” on its own as it played, modifying its own lookup table by comparing its predictions with your actual moves, and making adjustments accordingly if it lost.
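A toy version of that “coaching” update might look like the following.  This is purely illustrative; I am not reproducing the article’s actual adjustment rule, just the shape of the idea: after a loss, the table entries themselves are the thing that changes.

```python
def coach(table, losing_patterns, winning_patterns, step=1):
    """Toy Quintic-style 'coaching': after a loss, nudge the weights so
    the patterns the winner exploited score higher next game, and the
    patterns the loser favored score lower.  (Hypothetical rule.)"""
    for p in winning_patterns:
        table[p] = table.get(p, 0) + step
    for p in losing_patterns:
        table[p] = max(0, table.get(p, 0) - step)   # never go negative
    return table

print(coach({"XX": 8}, losing_patterns=["XX"], winning_patterns=["OO"]))
# prints {'XX': 7, 'OO': 1}
```

The appeal, then and now, is that the “intelligence” lives entirely in data that the program can modify, not in code.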

My program merely extends this basic idea of a simple board evaluation algorithm, with the complexity “hidden” in the pre-computed values of a lookup table.  It uses straightforward fixed-depth negamax tree search with alpha-beta pruning.  No transposition tables, no opening book; nothing really sophisticated at all.  But the board evaluation function is at once detailed and simple to implement… and thus relatively fast.  It is detailed in that it uses a lookup table indexed by line patterns that extend four points in both directions from a candidate move (Quintic had two similar but separate and much smaller tables, one for “center moves” and one for “end moves”).  It is simple in that there is no game-specific knowledge at all implemented in the board evaluation algorithm itself; everything from captures to open threes to potential wins, etc., is represented solely by appropriately scaled values in the lookup table.
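To give a concrete sense of how such a table can be indexed, here is one natural encoding (a sketch of the general idea, not necessarily my program’s exact scheme): each of the 8 points flanking the candidate move is one of empty/own/opponent, so a line pattern maps to a base-3 integer.

```python
EMPTY, OWN, OPPONENT = 0, 1, 2

def pattern_index(line):
    """Encode the 8 cells flanking a candidate move (4 on each side)
    as a base-3 integer, so one table needs 3**8 = 6561 entries."""
    assert len(line) == 8
    index = 0
    for cell in line:
        index = index * 3 + cell
    return index

# e.g. three of our own stones on one side of the move, the rest empty:
print(pattern_index([OWN, OWN, OWN, EMPTY, EMPTY, EMPTY, EMPTY, EMPTY]))  # prints 3159
```

All game knowledge (captures, open threes, winning fives) then amounts to assigning the right values to the right indices, which is exactly where the tuning effort goes.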

What was really interesting, and frankly surprising, was that my first draft of lookup table values resulted in a computer opponent that plays pretty well.  It holds its own against other AI opponents with the same search depth, including Mark Mammel’s excellent WPente.  More importantly, I cannot consistently beat the computer even with only 4-ply lookahead.  So by “plays pretty well” I suppose I mean that it at least regularly beats me… which was the whole point of this project.

Hello World

This is my first experiment with blogging.  I say “experiment,” because I am curious to see how the subject matter– and intended audience– will evolve.  The plan, anyway, is to discuss what I think are interesting ideas, presented in a way that will hopefully add clarity and detail, at least in my own mind.  We will see how it goes.

So what sort of ideas do I think are interesting?  I expect that no matter what the specific topic, science will be in there somewhere, with a likely focus on mathematics and computing, and just as likely digressions on math and science education.  But rather than spend too much time trying to set the stage, let’s start with a specific idea that occurred to me while reading Torkel Franzén’s “Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse.”

(Don’t worry, Gödel’s incompleteness theorems aren’t really the point of this discussion.  But artificial intelligence is– Franzén’s book deals with many different “popular” misapplications of the theorems, one of which is an argument that the human mind cannot be simply a “formal system” subject only to mathematical or physical laws, but must involve something more, possibly supernatural or divine, depending on who is arguing.  If you are interested, Hofstadter’s “Gödel, Escher, Bach: An Eternal Golden Braid” and Penrose’s “The Emperor’s New Mind” are both very interesting and entertaining books about this topic, with mostly opposing viewpoints.)

Anyway, where was I?  Oh yeah… there is an interesting quote, attributed to the computer scientist Larry Tesler, saying that “Artificial intelligence is whatever hasn’t been done yet.”  That is, we suppose that an “intelligent” computer or machine or robot must be able to perform some task that currently only humans can do effectively.  Just a few decades ago, playing board games was one of the first tasks that AI researchers tackled… and today computers play some board games very well– chess being the most popular example– and even play some games perfectly, checkers being the most recent “solved” game that I know of.

But now that computers are able to play so many board games so well, playing those games is no longer regarded as an activity that requires or demonstrates “intelligence.”  The basic approaches and algorithms used in game-playing are now relatively well-understood, and frequently involve simple brute-force evaluation of large numbers of possible move sequences, which differs markedly from how human players typically approach the same games.

Natural language processing is another example.  I remember as a kid playing with the program Eliza that I found in the book “More BASIC Computer Games” (anyone remember this?).  The program simulated a mostly incompetent psychiatrist, alternating questions/responses with the user’s text inputs.  It used a very crude English language processing algorithm, typically involving extracting whole chunks of the user’s input and regurgitating them back in the form of a question.  Crude… but fascinating.  Early text adventure games used similar ideas, initially allowing simple verb/object commands such as “Go North” or “Kill dwarf,” but eventually “understanding” more complex sentences as input.
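The whole trick can be captured in a few lines.  Here is a minimal sketch in the Eliza spirit (the rules below are made up for illustration, not taken from the BASIC listing): match a keyword pattern, capture the rest of the sentence, and reflect it back as a question.

```python
import re

# Each rule pairs a keyword pattern with a question template; the captured
# chunk of the user's own sentence gets regurgitated into the template.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1).rstrip(".!"))
    return "Please go on."            # default when no keyword matches

print(respond("I feel anxious about work"))  # prints: Why do you feel anxious about work?
```

The program understands nothing, of course; but as a kid it was easy to mistake pattern matching at this scale for something more.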

Jaded young video gamers of today would by no means consider these relics to exhibit any sort of intelligence… but that is exactly the point.  As soon as the state of the art expands its frontier of capability, those newly-learned tasks are no longer viewed as hallmarks of artificial intelligence, but simply the product of new or more complex– but still pre-programmed– behavior.

This point of view is interesting, since it seems to implicitly assume that we humans have something inherently “more” involved in our thought and behavior than can be simulated by a computer.  As soon as we are able to understand, and thus program, a particular complex activity, that activity is no longer sufficient to capture whatever is the essence of the human mind.

Proponents of the creationist “intelligent design” theory share a similar viewpoint.  (Bear with me here– this comparison is the motivation of this entire post.)  Intelligent design is the idea that life as we currently experience and observe it could not have originated spontaneously and evolved from there, but instead must have been designed and placed here by an intelligent creator.  Specifically, the argument of “irreducible complexity” is that there are some aspects of some living organisms that are so complex, yet so “functionally compact” (my own inadequate phrase), that they could not have evolved in stages over time, since any of the stages prior to complete functionality could not have survived natural selection.  The eye, with its many components all of which “must” work together for any useful behavior, and the bacterial flagellum are a couple of commonly used examples.

This argument of irreducible complexity has been referred to, usually without much respect, as the “argument from personal incredulity.”  That is, intelligent design is the idea that “I can’t understand it, so it must be divine.”  It occurred to me that this is essentially the (contrapositive of the) converse of the viewpoint giving rise to the AI effect, namely that humans must have fundamentally (divinely?) more intelligence than computers can possibly have: “I can understand it, so it must not be divine.”