The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics / Roger Penrose
Reviewed by Tal Cohen, Friday, 12 September 1997
Penrose is not the first to attack strong AI. Perhaps the most famous attack on the field was by John R. Searle, in his classic article “Minds, Brains, and Programs”. In that article, Searle introduced the “Chinese Room” thought experiment and, in fact, coined the term “Strong AI”.
The attacks made by Searle and others were based mainly on two ideas. The first claims that “simple reasoning” shows that a machine mindlessly following instructions cannot possibly have a mind (the Chinese Room argument). The other kind of argument claims that since any formal system is limited (as shown by Gödel’s Incompleteness Theorem), and since the mind is obviously not limited in a similar manner, it follows that the mind is not a formal system (or, in other words, not an algorithm). Of course, both claims are presented here in grossly oversimplified form.
Penrose leans heavily on those two arguments in his book, but offers a third line of attack on Strong AI: the physical angle. According to Penrose, some of the human mind’s functionality is based on quantum mechanics, in a way that cannot be simulated by any algorithm. And thus, he concludes, no algorithm can have a mind.
When attempting to put forward this argument, Penrose faces a serious problem: “to specialize is to be ignorant”. Few of the laypeople familiar with modern quantum mechanics are fluent in computing theory; of those who deal with computing, fewer still are familiar with quantum mechanics; and from both groups, very few indeed are familiar with biological research on the human brain itself. Penrose’s claims draw from all three fields, so in order to put forward his argument, he must first attempt to teach the reader enough about each of the three subjects.
As a result, the ten-chapter book has three chapters dedicated to computer science (mainly issues of provability and complexity, with a discussion of Gödel’s Theorem, Turing Machines, the Halting Problem, etc.). Four other chapters deal with physics, from Newtonian physics and Einstein’s Relativity to a hypothetical theory of “Correct Quantum Gravity” that Penrose suggests for the sake of argument, noting that it is entirely unproven. And finally, one chapter deals with physical aspects of the human brain. Only then, in the very last chapter of the book, can Penrose finally present his claims. Sadly, the result is a work that builds up strong momentum, but fails to deliver the blow.
As a computer science student, I found nothing new in Penrose’s chapters regarding my profession. The chapters about physics, however, were most enlightening, though I doubt I could have handled them without prior knowledge. I wonder whether any physicist who has read this book could tell me how clear the chapters on computing are.
Penrose repeatedly falls into the trap of presenting overly technical material. In many places, the example that follows a string of obscure equations is really all that was needed; the technical details could have been spared. I think that in a book like this, the physical equations are really pointless. To make matters worse, some of the material is outright aimless. For example, there is an entire chapter trying to prove that the physical world is not time-symmetric. This question is still an open debate among physicists, and it contributes very little to the actual arguments regarding AI. A page or two on the problem would have sufficed, I believe.
(In the introduction, the author states that he was told “each [mathematical] formula will cut down the general readership by half”. By this rule of thumb, and assuming that there are about five billion people on Earth right now, the book has a general readership of far less than a single person, even if one does not take into account those formulae that appear in footnotes and endnotes.)
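Taken literally, the rule of thumb above is easy to check: each formula halves the readership, so one only needs enough halvings to push five billion below one. A minimal sketch in Python (the five-billion figure is the review's own assumption; the function name is mine):

```python
import math

# World population as assumed in the review (circa 1997).
POPULATION = 5_000_000_000

def formulas_to_lose_everyone(population):
    """Number of readership-halving formulas needed before the
    expected general readership drops below one person."""
    return math.ceil(math.log2(population))

print(formulas_to_lose_everyone(POPULATION))  # 33
```

So a mere 33 formulas would already suffice; the book contains far more than that.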
When the concluding chapter finally arrives, the reader is certainly much wiser regarding many subjects, and is now ready to face Penrose’s claims regarding AI in general and Strong AI in particular. I, for one, was very disappointed with those arguments. Only one of them has to do directly with quantum mechanics (and that, after reading four chapters on physics). I cannot judge whether this argument is indeed correct, though on the face of it, it looks rather weak to me. It is one of the very few original arguments that Penrose presents, and I think he should have elaborated on it at far greater length.
Other arguments presented in the book have been attacked elsewhere, by AI experts. Here is a simple example: Penrose believes that the Halting Problem suggests that the human mind is non-algorithmic. The Halting Problem shows that no algorithm can decide, for every given algorithm and input, whether that algorithm will eventually halt. I fail to see why Penrose thinks the human mind is superior to an algorithm in this respect. Take a simple program that halts only when it finds an even number that is not the sum of two primes (a counterexample to the Goldbach Conjecture). Can Penrose’s non-algorithmic mind tell whether this program will eventually halt? And furthermore, would that prove that the mind is, or is not, algorithmic?
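The program described above is short enough to write down. A minimal sketch in Python (the `limit` parameter is added here only so a demonstration terminates; the program in question runs with no bound, and whether it ever halts is precisely the open Goldbach question):

```python
def is_prime(n):
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_goldbach_sum(n):
    """True if the even number n can be written as a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def first_counterexample(limit=None):
    """Halt only upon finding an even number > 2 that is NOT a sum of
    two primes. With limit=None this may run forever -- that is the point."""
    n = 4
    while limit is None or n <= limit:
        if not is_goldbach_sum(n):
            return n
        n += 2
    return None  # no counterexample found below the limit
```

No human knows whether `first_counterexample()` halts, which is exactly why the example cuts both ways: the mind enjoys no obvious advantage over the algorithm here.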
Penrose repeatedly states that he simply fears the idea that human minds are, after all, algorithms, with nothing divine in them except for their flabbergasting complexity. For example, he claims he cannot see how such a complex algorithm could have developed by a process of natural selection. He views simple life forms (insects, etc.) as mere automata, and he cannot tell where, in the evolutionary context, consciousness appears -- in dogs, or in monkeys, or perhaps only in humans. To me, it seems that complex algorithms are indeed only an evolution of simple automata, though I cannot tell if there is more to the human mind than algorithms. Penrose’s fear somehow reminds me of the fear many people expressed, not too long ago, of the idea that human life evolved from the apes.
And who says that the human mind is a perfect, bug-free algorithm? As a programmer, I find it very easy to doubt that the mind is indeed bug-free. It seems to me that it is a rather buggy algorithm -- just as the concept of natural selection would suggest, when applied to algorithms. Any programmer knows that complex programs are most often developed in a similar, incremental manner: write the basic algorithm, and then slowly improve it, building on what you already have as you go. If some addition to the code proves useful, keep it; if some addition proves harmful, remove it. In biological terms, this is what “survival of the fittest” is all about.
The most outrageous, and intriguing, suggestion that Penrose presents is that human consciousness does not simply flow forward in time, but rather moves forward and backward in time, in order to make decisions quickly enough to survive. This insane-sounding claim is even backed up by some (weak) experimental evidence that could probably be explained better in numerous other ways. But who knows, perhaps there is something to it after all...
The problem of Strong AI will probably remain open for a very long time. In a way, it is similar to the Goldbach Conjecture or Fermat’s Last Theorem: proving that it is impossible would require a very elaborate proof, while proving that it is possible would “simply” require a working example. But I am afraid that even when presented with a thinking, intelligent, conscious algorithm, some people (Penrose included) will simply refuse to accept it as such.