An Interview with Douglas R. Hofstadter, following “I Am a Strange Loop”
Reviewed by Tal Cohen | Wednesday, 11 June 2008
The interview below was conducted in September 2007 and was originally published, in Hebrew, in the online culture magazine Haayal Hakore. The interview was conducted by Tal Cohen and Yarden Nir-Buchbinder.

The first part of I Am a Strange Loop reads like a condensed version of GEB, explaining the idea of consciousness as a strange loop. However, unlike in GEB, you do not discuss AI in this book. Are you disappointed with the way cognitive/AI research has advanced in the past three decades? Did you, like many other researchers of the time, believe that intelligent machines were “just around the corner”? And if so, do you still believe it will happen, eventually?

I certainly did not believe intelligent machines were just around the corner when I wrote GEB. Chapter 19 of GEB makes that very clear indeed. Am I disappointed by the amount of progress in cognitive science and AI in the past 30 years or so? Not at all. To the contrary, I would have been extremely upset if we had come anywhere close to reaching human intelligence — it would have made me fear that our minds and souls were not deep. Reaching the goal of AI in just a few decades would have made me dramatically lose respect for humanity, and I certainly don’t want (and never wanted) that to happen.

I am a deep admirer of humanity at its finest and deepest and most powerful — of great people such as Helen Keller, Albert Einstein, Ella Fitzgerald, Albert Schweitzer, Frederic Chopin, Raoul Wallenberg, Fats Waller, and on and on. I find endless depth in such people (many more are listed in Chapter 17 of I Am a Strange Loop), and I would hate to think that all that beauty and profundity and goodness could be captured — even approximated in any way at all! — in the horribly rigid computational devices of our era.

Do I still believe it will happen someday? I can’t say for sure, but I suppose it will eventually, yes. I wouldn’t want to be around then, though. Such a world would be too alien for me. I prefer living in a world where computers are still very very stupid. And I get a huge kick out of laughing at the hilariously unpredictable inflexibility of the computer models of mental processes that my doctoral students and I co-design. It helps remind me of the immense subtlety and elusiveness of the human mind.

Indeed, I am very glad that we still have a very very long ways to go in our quest for AI. I think of this seemingly “pessimistic” view of mine as being in fact a profound kind of optimism, whereas the seemingly “optimistic” visions of Ray Kurzweil and others strike me as actually reflecting a deeply pessimistic view of the nature of the human mind. (I say all this much more poetically on p. 522 of Le Ton beau de Marot, by the way.)

We’ll return to Kurzweil soon. You use the word “soul”, rather than consciousness. While you clearly qualified the term to remove any religious connotations, avoiding such connotations is not really possible; “soul” is a very loaded symbol in this respect. Why did you choose to use it, and not, for example, “mind” or “consciousness” or any of several other, less-loaded alternatives?

I used the word “soul” because, out of all the various words that one might use — “consciousness”, “intentionality”, “mind”, and so forth — it is the one that I think most evocatively suggests the deep mystery of first-person existence that any philosophically inclined person must wonder about many times during their life. But I think that the first-person pronoun “I” is just as evocative a word for the same thing.
I could also have used the word “spirit”, I guess, but that, too, would have seemed loaded with religious flavor to many readers. The point is, whenever one talks about what life is, from the inside, one gets very close to what religion itself is all about. It therefore shouldn’t be too big a surprise that I appropriated a religion-flavored word to talk about a deep mystery that is so close to the very core of religion.

One of the most surprising arguments in the book (it in fact appeared in your previous book, Le Ton beau de Marot) is the idea that the soul outlives the body by having its copies, or “soul-shards”, exist in many brains — the brains of other people who have known the deceased; perhaps a stronger variation of the idea that a person lives on so long as others remember him. You present a compelling argument for the notion of a soul surviving its physical body by being spread across multiple brains; the more familiar a person is to others, the more strongly his soul is “present” in their brains, too. How would you respond to the claim that the “presence” of one soul in another soul’s brain is merely a simulation mechanism, developed by evolution as a means to improve survival? (Being able to predict what members of your clan are about to do can certainly be a powerful survival tool.)

My argument in I Am a Strange Loop is spelled out clearly. If a person’s soul is truly a pattern, then it can be realized in different media. Wherever that pattern exists in a sufficiently fine-grained way, then it is, by my definition, the soul itself and not some kind of “mere simulation” of it. “Mere simulation” is a phrase that sounds suspiciously like John Searle when he is contemptuously deriding AI in his usual flippant fashion. However, as I see it, there is no black-and-white dividing line between “mere simulations” of a complex entity and full realizations of it — there are just lots and lots of shades of gray all along the way. This spectrum is pointed out in many places in my books, including the three marvelous short stories by Stanislaw Lem included in The Mind’s I.

We’ll get back to Lem’s books, too... If indeed a soul can survive by being present in other brains, then certain souls survive for many centuries after their primary brain is gone. You provide an example with Frederic Chopin, saying that “Chopin, the actual person, survives so much in our world, even today”. A common saying is that authors are “immortalized” by their books. Your notion of soul-shards casts a whole new light on this idea, suggesting that authors (and artists, such as musicians) truly extend the survival of their soul by making their works well-known. In this sense, it seems like your own books make a huge effort to familiarize the readers with your inner life, the making of your soul; for example, where most authors would have written “My fabric was greatly influenced by many people, from Niels Bohr to Charlie Brown”, you provide a detailed list of over 45 names. The book (and your previous book, Le Ton beau de Marot) is peppered with scores of anecdotes about your personal life. As a reader, I certainly feel like I know you much better than I know most other authors, certainly authors of non-fiction works. Can this be viewed as an attempt to entrench shards of your soul in the minds of your readers? Is this your shot at immortality?

I am not shooting at immortality through my books, no. Nor do I think Chopin was shooting at immortality through his music.
That strikes me as a very selfish goal, and I don’t think Chopin was particularly selfish. I would also say that I think that music comes much closer to capturing the essence of a composer’s soul than a writer’s ideas come to capturing the writer’s soul. Perhaps some very emotional ideas that I express in my books can get across a bit of the essence of my soul to some readers, but I think that Chopin’s music probably does a lot better job (and the same holds, of course, for many composers).

I personally don’t have any thoughts about “shooting for immortality” when I write. I try to write simply in order to get ideas out there that I believe in and find fascinating, because I’d like to let other people be able to share those ideas. But intellectual ideas alone, no matter how fascinating they are, are not enough to transmit a soul across brains. Perhaps, as I say, my autobiographical passages — at least some of them — get tiny shards of my soul across to some people. But such autobiographical story-telling is not nearly as effective a means of soul-transmission as is living with someone you love for many years of your lives, and sharing profound life goals with them — that’s for sure!

Scientist and inventor Ray Kurzweil presents a different take on immortality, a more physical one. Like you, Kurzweil views the soul as “software” that can be executed on different “hardware”. He further believes that in a relatively short while, we will have electronic hardware which is the equivalent of the human brain (which you eloquently characterize as a “universal machine”, capable of “executing” any “soul software”). Once such hardware is available, Kurzweil believes, immortality will have been reached: by “downloading” our soul-software onto electronic brains (“Giant Electronic Brains”?), we will become immortal, able to create backups of our souls to be restored in case of disaster, and able to shift our physical location anywhere at the speed of a software download. Do you share Kurzweil’s view of hardware being able to execute human soul software within the foreseeable future? Do you agree with his view of this being the equivalent of immortality — will the software running on the electronic brain be the same “I”?

I think Ray Kurzweil is terrified by his own mortality and deeply longs to avoid death. I understand this obsession of his and am even somehow touched by its ferocious intensity, but I think it badly distorts his vision. As I see it, Kurzweil’s desperate hopes seriously cloud his scientific objectivity.

I think Kurzweil sees technology as progressing so deterministically fast (Moore’s Law, etc.) that inevitably, within a few decades, hardware will be so fast and nanotechnology so advanced that things unbelievable to us now will be easily doable. A key element in this whole vision is that no one will need to understand the mind or brain in order to copy a particular human’s mind with perfect accuracy, because trillions of tiny “nanobots” will swarm through the bloodstream in the human brain and will report back all the “wiring details” of that particular brain, which will at that point constitute a very complex table of data that can be fed into a universal computer program that executes neuron-firings, and presto — that individual’s mind has been reinstantiated in an electronic medium.
(This vision is quite reminiscent of the scenario painted in my piece “A Conversation with Einstein’s Brain” toward the end of The Mind’s I, actually, with the only difference being that there is no computer processing anything — it’s all done in the pages of a huge book, with a human being playing the role of the processor.)
Rather ironically, this vision totally bypasses the need for cognitive science or AI, because all one needs is the detailed wiring plan of a brain and then it’s a piece of cake to copy the brain in other media. And thus, says Kurzweil, we will have achieved immortal souls that live on (potentially forever) in superfast computational hardware — and Kurzweil sees this happening so soon that he is banking on his own brain being thus “uploaded” into superfast hardware, and hence he expects (or at least he loudly proclaims that he expects) to become literally immortal — and not in the way Chopin is quasi-immortal, with just little shards of his soul remaining, but with his whole soul preserved forever.

Well, the problem is that a soul by itself would go crazy; it has to live in a vastly complex world, and it has to cohabit that world with many other souls, commingling with them just as we do here on earth. To be sure, Kurzweil sees those things as no problem, either — we’ll have virtual worlds galore, “up there” in Cyberheaven, and of course there will be souls by the barrelful all running on the same hardware. And Kurzweil sees the new software souls as intermingling in all sorts of unanticipated and unimaginable ways.

Well, to me, this “glorious” new world would be the end of humanity as we know it. If such a vision comes to pass, it certainly would spell the end of human life. Once again, I don’t want to be there if such a vision should ever come to pass. But I doubt that it will come to pass for a very long time. How long? I just don’t know. Centuries, at least. But I don’t know. I’m not a futurologist in the least. But Kurzweil is far more “optimistic” (i.e., depressingly pessimistic, from my perspective) about the pace at which all these world-shaking changes will take place.

In any case, the vision that Kurzweil offers (and other very smart people offer it too, such as Hans Moravec, Vernor Vinge, perhaps Marvin Minsky, and many others — usually people who strike me as being overgrown teen-age sci-fi addicts, I have to say) is repugnant to me. On the surface it may sound very idealistic and utopian, but deep down I find it extremely selfish and greedy. “Me, me, me!” is how it sounds to me — “I want to live forever!” But who knows? I don’t even like thinking about this nutty technology-glorifying scenario, now usually called “The Singularity” (also called by some “The Rapture of the Nerds” — a great phrase!) — it just gives me the creeps. Sorry!

Another surprising argument in the book is the almost Zennish claim that the soul is nothing but an illusion; an illusion that exists because it hallucinates itself. Perhaps we can say that Hofstadter’s answer to the psychophysical problem is simply, “There is no problem, because there is no psycho. We just imagine it” — and the fact that we imagine it is what leads, in fact, to the psycho’s existence. This is the heart of the strange loop that lends its name to the book. The illusion stems from the brain’s need to create an internal representation of its surroundings (for survival purposes), and part of this representation necessarily includes the brain itself, the “I”, a mechanism that represents itself and thereby leads to its very existence. Clearly, not every brain represents itself with an equal level of detail; animals, for example, represent the world in their brains in an inferior manner, and hence their self-representation — including their “I”, or their “soul” — is inferior; perhaps one could say that they hallucinate less...
(Before you criticize the idea, please bear in mind that this brief outline probably does it a disservice; you’ll have to read the book to fully understand this theory, even if you disagree.) For the sake of discussion, Hofstadter came up with a “scale” for measuring the “soul level” of different brains: the Huneker scale. It is named after James Huneker, a music critic who said, in the early 1900s, that “Small-souled men, no matter how agile their fingers,” should not attempt playing a certain piano piece by Chopin.

You claim that an “I” is nothing but a myth, a hallucination perceived by a hallucination. Certainly, from this point of view, one can assume that there is nothing sacred about souls. Yet you seem to hold souls as sacred, being, for example, a vegetarian. Aren’t those views somewhat conflicting? If a soul is not real, and is nothing but the high-level result of biochemical processes, why care about the survival of other souls (and in particular “low-huneker” souls)?

I can’t explain this completely rationally. Sure, my brain falls for the universal myth of the “I” — the great hallucination, if you prefer — just as powerfully as does any other human brain. And this hallucination inevitably gives rise to compassion and empathy (yes, merely empathy for other hallucinations, if you will, but that’s just how it is). At some point, in any case, my compassion for other “beings” led me very naturally to finding it unacceptable to destroy other sentient beings (or other hallucinations, if you prefer), such as cows and pigs and lambs and fish and chickens, in order to consume their flesh, even if I knew that their (hallucinated) sentience wasn’t quite as high as the (hallucinated) sentience of human beings.

Where or on what basis to draw the line? How many hunekers merit respect? I didn’t know exactly. I decided once to draw the line between mammals and the rest of the animal world, and I stayed with that decision for about twenty years. Recently, however — just a couple of years ago, while I was writing I Am a Strange Loop, and thus being forced (by myself) to think all these issues through very intensely once again — I “lowered” my personal line, and I stopped eating animals of any sort or “size”. I feel more at ease with myself this way, although I do suspect, at times, that I may have gone a little too far. But I’d rather give a too-large tip to a server than a too-small one, and this is analogous. I’d rather err on the side of generosity than on the other side, so I’m vegetarian. (However, I don’t worry about the souls of tomatoes, as I point out in Chapter 1 of I Am a Strange Loop.)

Near the end of the book, you discuss performances of Bach’s music. Are there any modern performers that you esteem in particular? How about variations on Bach? Playing Bach in jazz style is very common, for example.

I don’t pay all that much attention to who is performing classical music, because for me most top-notch performers sound very similar to each other. There are of course subtle differences between great performers, but what counts most for me is the composer’s sequence of notes and harmonies, and that’s always there, just about perfectly. Small variations on how the notes and harmonies are produced can have small effects, but that’s all. Classical music is about the profound meanings put there by the composer, not about subtle tweaks on it brought out by the performer. At least that’s how I see it. You know how much I love analogies — well, here’s one.
To me, hearing lots of great performers perform the same piece (and I once had just such an experience — I was at a music festival in Aspen, Colorado, where a whole bunch of stunningly talented young pianists performed the very beautiful first movement of the Schumann piano concerto one after the other, and I listened for a couple of hours, and it was fascinating) is like watching the greatest slalom-skiers in the world on television as they compete against each other in the Olympics. They’re all unbelievably great at what they do, and they all come down the very long hill within a second or two of each other, sometimes separated by only tenths or hundredths of a second! The differences between them are almost microscopic, in the end. They are all beautiful to watch but I can’t tell one from the other. If somebody told me that they were all the same person wearing different-color uniforms, I wouldn’t shout “Impossible!” Would you? Mind you, this takes nothing away from such people’s enormous talent, but it just says that many different gifted people can carry out nearly indistinguishable marvelous feats that I myself could never aspire to do at all. What they do is hugely impressive, but the differences between the performances of these hugely gifted and hard-working individuals are not all that great. And as in skiing, so in performance of classical music.

Incidentally, to be very clear on this, I would say it’s a totally different ballgame when it comes to jazz and popular music, because in those cases liberties galore are taken by all performers — inserting one’s own personality is the name of the game! “The very same song” as sung by Ella Fitzgerald and Frank Sinatra can be unbelievably different (and of course this vast difference is also due in large part to the musicians in the bands accompanying the singers, as well as to the vastly different arrangements). In jazz and popular music, who is playing or singing counts enormously for me. It makes all the difference in the world. But in classical music, who is performing counts for far far less, as long as we are dealing with a highly talented performer.

I have to add that literary translation is a lot more like jazz or pop music than it is like performance of classical music, because once again, the personality of the translator becomes an intrinsic and central part of the “voice” that is speaking. That is inevitable, and it is why translation is such a deep and important art.

Translation is another subject in which Hofstadter has taken a great interest ever since he helped translate GEB into French. It is the key subject of Le Ton beau de Marot, and Hofstadter returns to it in this interview’s final question. In fact, he uses this question to tie everything together:

When presenting questions about the soul, I Am a Strange Loop (like several of the articles in The Mind’s I) draws on science-fiction themes, such as teleportation. Yet the list of influential people in your life includes no SF authors, and the only SF character in that list is Captain Nemo. Do you read science fiction at all? What are your thoughts on this genre?

When I was around ten or twelve, I liked science fiction (I still vividly recall some aspects of the novel Red Planet by Robert Heinlein) and found it very stimulating, but after a while I grew tired of it.
Nonetheless, I do admire a few science-fiction authors, such as Robert L. Forward (who was a professional physicist — he’s the author of Dragon’s Egg, to which I devote nearly a whole chapter in Le Ton beau de Marot) and Stanislaw Lem (author of The Cyberiad, among many other books, from which two wonderful short pieces in The Mind’s I are taken — and his separate piece Non Serviam is also included in The Mind’s I). But even Lem, despite all his scientific subtlety and philosophical insight, occasionally troubles me (I remember I once read some long novel by him that I couldn’t stand!). The problem is that too many liberties are taken and one can’t really believe in the scenarios any more.

I guess I am pretty old-fashioned in my literary tastes. I like a powerful, believable novel, such as The Kite Runner or A Thousand Splendid Suns, by Khaled Hosseini, which both take place in Afghanistan, and of course The Catcher in the Rye by J.D. Salinger, and Vikram Seth’s wonderful novel-in-verse The Golden Gate. That last novel, by the way, was inspired by Pushkin’s spectacular novel-in-verse Eugene Onegin. I was in fact led to Eugene Onegin by meeting Seth for a coffee some twenty years ago in a Palo Alto bookstore and learning on that occasion about how he had been deeply inspired by Charles Johnston’s English rendition of EO, and shortly thereafter I eagerly gobbled down several English translations of EO (including Johnston’s, of course), in the process falling deeply in love with James Falen’s intoxicatingly beautiful version of EO. That love affair then inevitably led me, a few years later, to tackling the Russian original — and to my utter astonishment, I wound up memorizing huge chunks of it (as do the Russians themselves), and then translating the entire novel-in-verse into English verse myself. That year-long adventure was one of the most wonderful episodes of my entire life, and I tell the whole story in my Translator’s Preface to that book.

Translating EO into my own style of English verse, and seeing it in other people’s styles in English, French, German, Italian, etc., gave me lots of metaphors for thinking about the human soul and how it can survive in other media — after all, here was the “soul of Russian literature” (for that’s truly how Russians conceive of Eugene Onegin, although non-Russians don’t generally know that) being transplanted into a radically different medium and yet surviving beautifully in, say, James Falen’s or Babette Deutsch’s anglicizations (and possibly in mine as well, but I can’t be objective on that score). I often wondered about the hypothetical sci-fi scenario in which all traces of the Russian original were destroyed, so that all that was left of Eugene Onegin was, say, James Falen’s English version of it — would Eugene Onegin have survived? And my answer always was a categorical and unmitigated “Yes!”

And that’s how I think about the survival of human souls in other human brains — it’s just that the “translations” of souls aren’t nearly as high-quality as Falen’s is of Pushkin’s original. That’s too bad, but it’s life. You take what you can get.

Pushkin puts it this way, in stanza 38 of Chapter II of Eugene Onegin, in my translation (he’s speaking of the young would-be poet named Lensky, who is in a cemetery standing near the graves of his parents and brooding poetically about life’s all-too-fleeting nature):

And there he, on the stark, dark marker
Atop his parents’ graves, shed tears,
And praised their ashes — darker, starker.
Alas, life reaps too fast its years;
All flesh is grass. Each generation,
At heaven’s hidden motivation,
Arises, blooms, and falls from grace;
Another quickly takes its place.
And thus our race, rash and impetuous,
Ascends and has its day, then raves
And hastens toward ancestral graves.
All too soon, death’s sting will get to us;
Aye, how our children’s children rush
And push us from this world’s sweet crush.

Notice, by the way, the capital “A” at the beginning of each line — just a whim of mine that I indulged myself in (and which, I think, improved the stanza’s quality, since this extra self-imposed constraint forced me to pay incredibly close attention to each and every word choice in each and every line to make everything flow effortlessly despite the constraint). And here, just for the sake of comparison, is James Falen’s marvelously mellifluous and lyrical version of that same stanza:

And then with verse of quickened sadness
He honored too, in tears and pain,
His parents’ dust... their memory’s gladness...
Alas! Upon life’s furrowed plain —
A harvest brief, each generation,
By fate’s mysterious dispensation,
Arises, ripens, and must fall;
Then others too must heed the call.
For thus our giddy race gains power:
It waxes, stirs, turns seething wave,
Then crowds its forebears toward the grave.
And we as well shall face that hour
When one fine day our grandsons true
Straight out of life will crowd us too!

I hope that by comparing these two English versions of the Russian original, you can get a sense for how “the same pattern” exists in them all — two of them in the medium or substrate of English, and one of them (unseen here) in the substrate of Russian. To me, this is a wonderful metaphor for soul transplantation.
a former student of DRH
writes in reply to Tal Cohen: narcissism and self-reference
Wow. He has changed. Yeah. I was wrong about him.
Posted on Tuesday, 17 June 2008 at 3:43 GMT
Nobody who stumbled upon this
writes in reply to a former student of DRH: narcissism and self-reference
Yes, I am sure in the few years his personality completely reversed itself and it just *seems* like he is still a self-involved prick. That question hit it right on the mark, about him wanting to let his soul survive by injecting his personality into different mediums, even if he denied it. And even tho he denied it, he is very much like Kurzweil in wanting to live forever just like him. I like how he ended it by cherry-picking his verse and comparing it to the other guy's. Shameless.
Posted on Friday, 22 February 2013 at 6:42 GMT
Bob Morlock
writes: Repugnant?
Hofstadter calling Kurzweil's predictions of an immortal soul repugnant and greedy because it's all based on fear of one's own death is quite ironic, considering that life and greed are really two sides of the same coin. The second irony is that this sort of "evolution" (the singularity) isn't artificial or unnatural at all, since it's all coming from our very own intellect, i.e. the result of millions of years of evolution. So, if you hate those ideas, well, blame humankind itself. Maybe Hofstadter's real fear is that machines will someday have more soul than he does...
Posted on Friday, 13 June 2008 at 13:29 GMT
ari
writes in reply to Bob Morlock: Repugnant?
Although there are evolutionary progressions that occur all the time, I doubt that this will be the case with humans and technology. I maintain that technology is just a tool and not an evolutionary step. In fact, I resent the idea that humanity can be encapsulated by humanity. I see it as a rather self-absorbed notion that anyone could truly believe that the 'soul' as referenced here could be replicated in a machine. After all, at its best would it not just be an echo incapable of unique expression?
Posted on Thursday, 14 October 2010 at 10:49 GMT
Che
writes in reply to (anonymous): Repugnant?
Quote: "Kurzweil is wrong and this guy is right. A mere hardware is not enough to simulate a proper 'soul' with a high huneker score. It is enough to make semi-intelligent tools but it will never be close to the real deal."
Never say never, dog. It's just not smart.
Posted on Saturday, 14 June 2008 at 3:56 GMT
Luke Armstrong
writes in reply to Che: Never Say Never Dog?
It's just impossible to say that and sound smart, dog.
Posted on Sunday, 06 March 2011 at 23:32 GMT
Mike A.
writes in reply to Beepoh: RE: Totally unsatisfying!
Agreed. Frankly, I think it is a huge mistake to be speaking of "souls" in a serious discussion of intelligence and consciousness. We have no reason whatsoever for asserting that consciousness has any "magic" component that isn't a consequence of the operation of the nervous system. Brains are 3.5 pounds of matter doing what matter does. As such, while we may not know how to construct a "strong" artificial intelligence, there is no basis, IMO, for baldly asserting that only our squishy brains can support sentience.
Posted on Saturday, 14 June 2008 at 0:38 GMT
Keith
writes in reply to Mike A.: RE: Totally unsatisfying!
I think Beepoh and Mike A. are having a little trouble seeing the Simmballs through the Simms.
Posted on Saturday, 19 December 2009 at 15:17 GMT
(anonymous)
writes in reply to Tracy: Harmonic resonance
I thoroughly enjoyed this review and find so many of Hofstadter's ideas ring true with my thoughts. One of the main ones being how the depth of our existence has been trivialized by science. For example, biology can provide a fairly comprehensive classification of a plant. Society accepts the scientific classification of a plant and it is written off as having been explained. But there is a whole sub-microscopic world within the plant that is overlooked. I use a plant as an example, but this is the case for all living things classified by science. There is so much depth within the planet and universe that we have missed just by accepting science's shallow classifications. I do find science a very useful tool which has gotten us quite far. But I still believe the stage of science that we are at is equivalent to looking at neutrinos through coke-bottle lenses, or we are only 1% successful in explaining the world and existence with science so far.

You had me up until you mentioned "life force" and "cosmic" something. If you didn't mean those words to have the type of new-agey connotations that they have unfortunately acquired, then I apologise. Words like "life force" really need to be qualified somewhat, like Hofstadter had to qualify the word "soul" as something areligious before it had any kind of justification to be in such a book as I Am a Strange Loop. What you're saying is absolutely right apart from that, though. Science is extremely valuable as a tool for engaging and dealing with the world. And while the classifications which it inevitably makes in order to render the world comprehensible are necessary for our day-to-day existence, the idea that we are reaching "the truth" or that the scientific mode of description is adequate to fully understand the world is an entirely false one. What is most harmful about the prevalence of this one-mode-among-many of describing the world is the fact that a tool (as this worldview is), through its use, makes tools out of the entirety of the rest of the world. Instead of looking at the brain and learning about it and wondering about how amazing it is, we are busy trying to dismantle it in order to rebuild it artificially in order to transfer our "consciousnesses", if such a thing exists, so that we can live in some absurd digital utopia.
Posted on Thursday, 02 April 2009 at 11:04 GMT
Corwin
writes in reply to (anonymous): Harmonic resonance
Maybe you should've stopped reading at "I believe". What other qualification do you need than the statement that the following is a belief and not purely based on data or studies?
Posted on Thursday, 04 March 2010 at 3:21 GMT
spinoza
writes: He's so close...
DRH is so close. Here's an important idea missing from his thought process in the book: the feel of "I" lies only in the "middle" of the spectrum of what it means to be conscious of one's self, and is not the "peak" or highest level of it. By "level" I mean the intensity or clarity of attention upon itself. When the parietal lobe participates in our "feel" for extension in time, this gives rise to the distinction between "I" and those things which appear as "Not I". As the parietal lobe "quiets down" through long periods of intense focused attention, the feel for "I" begins to fade, and is replaced by a feel of "everything is I, and I is everything". Another trick of the spatial/temporal functions of the brain, based on how skillfully we learn to use it. This new "non-self" feeling is achieved only when the mind nears the "top" of the attention spectrum (long intense focused attention is usually required). This state is rarely achieved by populations, and so we all just culturally assume that the "I" is the best or "real" notion of what it means to feel conscious.

DRH is right to say the "I" is a kind of necessary hallucination, but he fails to wonder if the illusion of "I" may actually be part of a spectrum of conscious-type notions, and that if the processes which give rise to "I" were themselves given a "boost", the mind might actually perceive a qualitatively different picture of itself (perhaps a more accurate one). What I'm saying here doesn't help explain the main problem of how we come to have these experiences, but it does suggest that the feel of "I" may only be a less skilled, and therefore incomplete (rather than illusory), view which could be improved with practice.

Furthermore, to improve the conversation we must sub-divide the unhelpful encapsulating labels of "consciousness" or "I" statements into more helpful ones, like "attention", that can be used to discuss conscious-like states in their varying degrees. For example, as I wake in the morning, I get a "1" on the "conscious attention scale", and after I've had my coffee, I get a "3". Riding my motorcycle on the way home gets me to "5", and focused meditation on DRH's ideas about myself gets me to a "7", and so forth...
Posted on Wednesday, 17 June 2009 at 0:40 GMT
StanislawStapledon
writes in reply to spinoza: He's so close...
May I suggest reading the works of the American philosopher of mind/consciousness KEN WILBER, who tries to map out the Full Spectrum of Mental States of Consciousness - from subatomic vibrations to mystical experiences of Enlightenment/Epiphany/Oneness/Absolute Causality and the like. WILBER's lifetime work is deeply rooted in (Neo)Platonism (Plotinus especially) and German Idealism of the Early Romantic type, like Fichte & Schelling. It all comes down to this: A = B(D(H(...) + I(...) + ...) + E(J(...) + K(...) + ...) + C(F(...) + G(...) + ...) + ... <=> A = A, or just A. () means Holarchic Order. + means Aggregation on the same Holarchic Level. = means Identity as in Sameness. A Holarchic Chain of Being, which is at the lowest and highest level all the SAME. It is completely unknown how deep and wide this Holarchic Order really is.
Posted on Saturday, 04 January 2014 at 23:59 GMT
Archer Krantz
writes: inspiration!
Ah, Hofstadter! This interview was a joy to read. I read Mind's I when I was in junior high, and being infatuated with the mystery and wonder of consciousness, it filled a great hunger. That said, I was totally disappointed when I got to the ant-colony chapter and they failed to see what I saw at the time as a huge philosophical gap yet to be addressed... but I haven't thought about it much in recent years, so I wonder if I'd see that same gap now?

Okay, thinking about it again now, I remember the problem I had. The explanatory gap was this: how do individual primordial minds (ants, neurons, whatever) make the discrete transition to one unified mind? Some argue that our minds are not unified, but I hallucinate myself as unified, my experience is all I can know from, and I cannot help but need an explanation for my unification. It is the same question as: when does consciousness "start" in the development of a prenatal human or other animal? At what point? And what is so special about this moment, physically? My answer has been that an individual consciousness never does begin--it simply always has been. In the development of animals we must amplify or expand self-hallucinations which have always been. So the leap doesn't have to be made! All right then, I'm satisfied for the moment with a new conception of the world as abundant, perhaps infinite, information recursions which are amplified in the development of organisms, especially the animals. Little soul-seeds. Maybe DRH and Dennett saw it like this when they chose that ant-colony chapter. I wish I could know how they'd answer this question.

P.S. I'm going to read Strange Loop now, with the express intent of comparing it to the Radical Constructivism (a philosophy, sort of) of Ernst von Glasersfeld. He draws on cybernetics (recursion, computation, perception) as well as linguistics, with references to translation. His philosophy deals with the process of knowing and cognition and should be of interest to some DRH fans.
Posted on Friday, 16 October 2009 at 14:14 GMT
Phil Ballla
writes: good indebtedness to you & Hofstadter
Thank you for your good work putting Hofstadter in such condensed form so I can see such lovely parts in him from where I live in the mountains of Kyushu, Japan -- without books, without access to them -- but with access to a computer. I like Hofstadter's reverence for and celebration of humanity for the ineffable thing in it that I'll call "context." I like his translation of Eugene Onegin for its rhythm and musicality -- surpassing Falen's for the same bit shown. Hofstadter reminds me of the key to genius in the poet Joseph Brodsky -- the love of what the Russian-American called "loose ends."
Posted on Saturday, 05 December 2009 at 22:21 GMT
martinc2@aol.com
writes: Metzinger's Ego Tunnel
Thomas Metzinger: The Ego Tunnel. Has anyone seen a thoughtful review? Metzinger talks less about the mechanics of how the neural representations come to be experienced as self. However, overall it followed more clearly for me, and I was especially interested in his discussion of societal implications down the road, good and risky.
Posted on Tuesday, 20 April 2010 at 19:06 GMT
(anonymous)
writes in reply to martinc2@aol.com: Metzinger's Ego Tunnel
Learn something new every day; excellent insight.
Posted on Thursday, 29 April 2010 at 17:17 GMT
darren
writes: none
GEB is about the possibilities, I (Me & You) choose the perspective.
Posted on Monday, 05 July 2010 at 12:35 GMT
Adelbert Shah
writes: Current influence on research
I don't see any problems, for none of the different parties... Nowadays just every single theme and/or subject is forged down into hard-rock lyrics (I am speaking in the year 2010). Please, all of you competitors (and as alleged), back up your sincere own minds right now, and all instantly get in cryo. You won't even have to wake back up - for no single reason at all - since Philosophy will do it for you; and best of all, with no more censorship around. Very non-American greetz.
Posted on Monday, 30 August 2010 at 16:43 GMT
Jose_X
writes: A "hardware" body might not do
To me soul/mind/sensation is an unexplained effect (how do you explain a sense?) that is intertwined with something physical. It is not the digital abstract knowledge but the physical sensation living out what we abstract as "knowledge". We experience sensation "through" chemical and electrical effects along neurons, I figure. The higher the level of consciousness, I will guess, the greater the number of neurons and charge (or some other physical phenomenon) participating.

Knowledge is the association of a very conscious sensation (eg, as mental words) with another and very particular conscious sensation that effectively means to us "this is knowledge; it is correct; I can reproduce it". We also can try to effect the knowledge in some way physically (eg, as our minds trigger into physical action) or by looking up more "knowledge" sensations (eg, think and reason) to try and confirm it. To "confirm" is to energize or whatever more of these knowledge linkages, re-affirming existing connections and ultimately creating or strengthening other connections (the "conclusion"), or else a "contradiction" is reached as the implied neural action through one path would require that one or more past established paths be undone and we are able to recognize this (keyed in by a pain sensation from being unable to easily enough uncement too many nearby connections near that section we want to undo to effect our implied change/contradiction), stop the uncementing from happening completely, and tag the initiating "knowledge" as "false" for the time being or invert those connections to suggest the "inverse" knowledge. Our consciousness becomes these existing "knowledge chunks" linked to each other and being traversed; this is us "thinking".

If the earth has a mind, I don't see why I would ever know. If that mind can lead to the movement of mountains or anything else in some time frame or other, I may or may not study sufficient recorded data points to deduce that is going on. And it might just be that every single event we experience and all physics and science which we deem to be predictable or to follow laws of nature (including the unpredictability from complexity we can't measure and from probabilistic mechanisms as we have attributed to QM) has a mind behind it somehow. Yes, even our minds are likely but forces of nature working predictably but allowing our minds, through sensations, to go along for the ride. The body is just an object of sufficient mass, physical rigor, and ability to exert controlled forces coordinated with other physical parts (nerves, etc) closely tied to the physical stuff through which we experience consciousness.

What defines the focus of consciousness? Ie, why would there appear to be a mind in control here and, by "reasoning", another over there? Likely if we combine our neurons and what not, our consciousnesses will meld a bit. I'm not sure, but something biological, chemical, and/or physical within our brains likely offers the answer (it might just be the proximity of so much neural action).
Posted on Wednesday, 17 November 2010 at 14:17 GMT
Jose_X
writes in reply to Jose_X: A "hardware" body might not do
I forgot to add: a major part of having sanity is being able to deal with potential mental conflicts in a way where you are a more fit organism. Ie, it's not to be taken for granted that we can identify the paths that would have to be uncemented and handle that properly, or return to the initiating knowledge and find its "inverse". Strength comes from many reinforcing linkages, but we cannot use mere reinforcement to make a given theory/knowledge be correct if we can't resolve the pieces or conclusion experimentally (ie, with our physical bodies, including eyes, ears, and other sense organs). So the key and luck is to come up with a good model (set of knowledge chunks) before we go insane or give up on the topic.

Each knowledge chunk is but neurons, I gather, linked/triggered in a way from which we can eg utter the words and visualize the statement (or some other reproducible sensation if we are mute, blind, deaf, etc). Ie, we can trigger the sense/motor organ work in a consistent manner to/from these neurons. "Consistent" in the sense that the neurons representing "hello" both are energized when we hear that word and can lead to us speaking it when we start from those neurons, and that we cycle right back upon hearing our own spoken word. This verification that the "hello" neurons are one and the same all the time, under us hearing or speaking or seeing that word time after time, is key to us maintaining those neurons for that purpose. A failure likely means we did not enunciate well, or we have damage to our organs, or we are in an echo or other type of chamber, etc, that we will have to learn to re-sync with at some point in time.

So life as a baby starts off by designation of neuron clusters to simple "statements" that relate to simple images and sounds. We associate sounds and visuals and reject or accept the simple "conclusion" based on pain/pleasure and/or (in)ability to effect. Pain/pleasure and physical (including hearing, seeing, etc) effectibility is what guides what we will accept as true or good vs not. We move up creating many chunks as necessary; eventually, a chunk for the words we hear and then for associations in logic and other abstractions related to these sounds and pictures. We learn that arm is a-r-m visually and aurally, and as well that it can be used as a replacement for referring to this thing hanging off my shoulders. In time we associate arm actions and more complex associations and words. Everything new is attached to our existing neural net. Under this model, "neurons" being energized is our sensations and consciousness, which fairly quickly associate with simple phrase/word/letter clusters. A general neural cluster might not be 10 words, but rather be a concept, which, based on immediate linkages and nearby linkages, we can then express quickly through some words. Anyway, the magic to mental words is to have some group of neurons be able to serve as a place marker that can quickly lead to the physical sensation of those words (speak/write or hear/see them). So having paper and accidentally as a society coming upon particular symbols (pictures, eventually decomposable into letters from a small alphabet) and rules (grammar, starting with just nouns) is what enables our brains to capture these tangible entities to use later on for analysis/reasoning/learning/etc that might be more advanced.

Ie, we can use the neurons associated with a-r-m as it looks written on paper in place where we want to consider the actual arm. Letters might be more efficient than thinking strictly through pictures to the extent words are an abstraction (made tangible through the particular spelling of what the word represents) that can be designed to cover an endless number of concepts efficiently. Ie, instead of 1 trillion pictures, we can have a language with many words and concepts represented as their equivalent descriptive phrases. Digitalization is a way to remove unimportant detail to reduce the information storage requirements. A picture has much detail that might not be important. A picture might not provide a consistent way to label many different things without confusion, as would be possible through sentence phrases using the digital alphabet of (eg) 26 letters. A 2-D language (rather than a linear language like English) might be possible and exist to some degree, but apparently linear is sufficient or represents the point in social evolution where we find ourselves, and perhaps this has to do with biological limitations, eg, neurons being connected efficiently and compactly as linear strings rather than in 2D meshes. That we can capture and save images is seen in earlier organisms, since that is required to survive (identify good or bad food, good or bad situation, etc).
Posted on Wednesday, 17 November 2010 at 15:25 GMT
Jose_X
writes in reply to Jose_X: A "hardware" body might not do
Reinforcement happens in part from many nearby linkages locked in a tight stable topology, suggestive of a "tight" theory or related reinforcing theories. Also, using a more efficient (simpler) model to understand something implies fewer neurons used. So we want the simplest possible theories, but no simpler, since too simple would then imply we could not interlock as well with other things in our head (eg, we could not answer certain questions posed and hence could not embed (or lock in or ...) this particular cluster into a wider strong context). "Embedding" a neural cluster within the existing net can be seen to achieve abstraction. It requires fewer neurons, and the bits that identify the new context reinforce and are reinforced by all other things related to the particular abstraction.

A recent study claimed bees can solve the traveling salesman problem very fast. It might just be that we can very well assign a certain amount of charge to a region (the bee would have the neurons optimized in a grid/lattice layout), perhaps by "tying down" various neural points, and then the neural path able to be energized with a given amount of energy or less would naturally arise from the physical electromagnetic potential forces. Thus the traveling salesman solution would be effected (up to some limit) as the path energized with the least amount of charge. The bee would only have so much resolution (unit distance in world length implied by total distance to be covered and number of neurons in the neural lattice). Perhaps over many fast reductions the amount of charge goes down and down. Why minimal charge? Well, that would involve lowest energy consumption, which might simply be a "pleasurable" state for the bee. Also, why so fast for a little bee? Because it would be solved with a neural model closely associated with the problem being solved. A computer or person would use higher-level abstractions to simulate the problem components. This would be much more bulky and would not be solvable through a more direct "analog" physics.
Posted on Wednesday, 17 November 2010 at 15:48 GMT
Rich Falzone
writes: An Interview with Douglas R. Hofstadter, following "I Am a Strange Loop"
I just now (in 2012) discovered this website. I notice the last post was in 2010, and I hope it's not too late to add my own thoughts. I read GEB, Mind's I & etc. in the 80's, Le Ton beau and Strange Loop more recently. I am very impressed with Hofstadter's writings, and his intellect which gives them such force. I was very impressed with GEB at the time I read it, and was convinced that yes, the basis of conscious thought just had to be software - how could it be otherwise? These days (in my 70's), I dunno. The conceptual framework of H. sapiens may not be powerful enough to capture the concepts required to understand consciousness. Kind of like a parakeet trying to understand long division. In any case, I think that to understand consciousness one must first understand the fundamental nature of physical existence itself. This is not meant to be pessimistic! Merely realistic.
Posted on Tuesday, 15 May 2012 at 23:09 GMT
Jonathan Gagen
writes: The impossibility of copying a soul and the definition of a soul
A "soul" cannot be defined as a loop; rather it is a tao. In the Tao Te Ching, one of the first principles is that the origin of all particular things is naming, or "abstraction". Abstraction cannot be defined as a "hallucination", because such a process describes a complex chemical process. Rather, think of the original soul as a singularity which takes the existence of one of Jung's archetypes, but this one is original in every meaning of the word: it is self, and self or "I" works by mapping out the entire non-existent mathematical construct of the multiverse as abstractions, or Jung's common archetypes. The only difference is that the archetypes are secondary to one base archetype, the "I", and they take the place of physical objects as abstract concepts in a mathematical abstracted grid (or the abstraction of the multiverse). This multiverse abstraction is filled with minor "namings" or abstractions, which becomes a map that we construct as the "hallucination of self", and this "hallucination" becomes that strange loop. The loop, if copied, would remain useless without a base archetype or true "I", because the loop itself is a constructed abstracted multiverse of the "I" (a world, if you will, in which the "I" lives in order to survive). Therefore, AI would be completely useless without a base archetype.
Posted on Tuesday, 05 June 2012 at 5:25 GMT
Dylan Gillis
writes: soul as software
This is all very interesting, and many great and profound ideas have been expressed. And I have read only part of all the responses, so I apologize ahead of time if I repeat something already brought up. What has occurred to me to add to the discussion is the question: if one's self or soul can be recorded and transferred as a pattern through whatever medium (biological, technological or spiritual), wouldn't it also be able to be "copied"? That is, I wonder where "I" would be if there were more than one copy of the pattern that makes up my "self" or "soul". Something about that conundrum makes me doubt that true existence of self or soul can actually be achieved by replication of a pattern, however complex, accurate and fine-grained it is. So, like maybe, there is something else involved, like a point of view, or a "looking from", that is fundamental to one's being, which cannot be transferred in the manner(s) being discussed. Just saying.
Posted on Wednesday, 06 November 2013 at 21:59 GMT