in myriad ways. By the time this brain had lived in this body for a couple of years or so, the “I” notion was locked into it beyond any conceivable hope of reversal.
…But Am I Real?
And yet, was this “I”, for all its tremendous stability and apparent utility, a real thing, or was it just a comfortable myth?
What if the box had been sealed shut so I had no way of looking at the individual envelopes? What if my knowledge of the box of envelopes necessarily came from dealing with its hundred envelopes all at once, as a single unit?
If, in addition, it turned out that talking about this supposed marble had enormously useful explanatory power in my life, and if, on top of that, all my friends had similar cardboard boxes and all of them spoke ceaselessly — and wholly unskeptically — about the “marbles” inside their boxes, then it would have become nearly impossible for me to doubt the reality of my own marble.
And thus it is with this notion of “I”. Because it encapsulates so neatly and so efficiently for us what we perceive to be truly important aspects of causality in the world, we cannot help attributing reality to our “I” and to those of other people — indeed, the highest possible level of reality.
The Size of the Strange Loop that Constitutes a Self
One more time, let’s go back and talk about mosquitoes and dogs. Do they have anything like an “I” symbol? In Chapter 1, when I spoke of “small souls” and “large souls”, I said that this is not a black-and-white matter but one of degree. We thus have to ask, is there a strange loop — a sophisticated level-crossing feedback loop — inside a mosquito’s head? Does a mosquito have a rich, symbolic representation of itself, including representations of its desires and of entities that threaten those desires, and does it have a representation of itself in comparison with other selves? Could a mosquito think a thought even vaguely reminiscent of “I can smile just like Hopalong Cassidy!” — for example, “I can bite just like Buzzaround Betty!”? I think the answer to these and similar questions is quite obviously, “No way in the world!” (thanks to the incredibly spartan symbol repertoire of a mosquito brain, barely larger than the symbol repertoire of a flush toilet or a thermostat), and accordingly, I have no qualms about dismissing the idea of there being a strange loop of selfhood in as tiny and swattable a brain as that of a mosquito.
On the other hand, where dogs are concerned, I find, not surprisingly, much more reason to think that there are at least the rudiments of such a loop in there. Not only do dogs have brains that house many rather subtle categories (such as “UPS truck” or “things I can pick up in the house and walk around with in my mouth without being punished”), but they also seem to have some rudimentary understanding of their own desires and the desires of others, whether those others are other dogs or human beings. A dog often knows when its master is unhappy with it, and wags its tail in the hope of restoring good feelings. Nonetheless, a dog, saliently lacking an arbitrarily extensible concept repertoire and therefore possessing only a rudimentary episodic memory (and of course totally lacking any permanent storehouse of imagined future events strung out along a mental timeline, let alone counterfactual scenarios hovering around the past, the present, and even the future), necessarily has a self-representation far simpler than that of an adult human, and for that reason a dog has a far smaller soul.
The Supposed Selves of Robot Vehicles
I was most impressed when I read about “Stanley”, a robot vehicle developed at the Stanford Artificial Intelligence Laboratory that not too long ago drove all by itself across the Nevada desert, relying just on its laser rangefinders, its television camera, and GPS navigation. I could not help asking myself, “How much of an ‘I’ does Stanley have?”
In an interview shortly after the triumphant desert crossing, one gung-ho industrialist, the director of research and development at Intel (you should keep in mind that Intel manufactured the computer hardware on board Stanley), bluntly proclaimed: “Deep Blue [IBM’s chess machine that defeated world champion Garry Kasparov in 1997] was just processing power. It didn’t think. Stanley thinks.”
Well, with all due respect for the remarkable collective accomplishment that Stanley represents, I can only comment that this remark constitutes shameless, unadulterated, and naive hype. I see things very differently. If and when Stanley ever acquires the ability to form limitlessly snowballing categories such as those in the list that opened this chapter, then I will gladly reopen the question of whether it thinks.
At one point, Stanley’s video camera picked up another robot vehicle ahead of it (this was H1, a rival vehicle from Carnegie-Mellon University), and eventually Stanley pulled around H1 and left it in its dust. (By the way, I am carefully avoiding the pronoun “he” in this text, although it was par for the course in journalistic references to Stanley, and perhaps at the AI Lab as well, given that the vehicle had been given a human name. Unfortunately, such linguistic sloppiness is the first step down a slippery slope that soon ends in full-blown anthropomorphism.) One can see this event taking place on the videotape made by that camera, and it is the climax of the whole story. At this crucial moment, did Stanley recognize the other vehicle as being “like me”? Did Stanley think, as it gaily whipped by H1, “There but for the grace of God go I”, or perhaps “Aha, gotcha!”? Come to think of it, why did I write that Stanley “gaily whipped by” H1?
What would it take for a robot vehicle to think such thoughts or have such feelings? Would it suffice for Stanley’s rigidly mounted TV camera to be able to turn around on itself and for Stanley thereby to acquire visual imagery of itself? Of course not. That may be one indispensable move in the long process of acquiring an “I”, but as we know in the case of chickens and cockroaches, perception of a body part does not a self make.
A Counterfactual Stanley
What is lacking in Stanley that would endow it with an “I”, and what does not seem to be part of the research program for developers of self-driving vehicles, is a deep understanding of its place in the world. By this I do not mean, of course, the vehicle’s location on the earth’s surface, which GPS gives it down to the centimeter; I mean a rich representation of the vehicle’s own actions and of its relations to other vehicles, a rich representation of its goals and its “hopes”. This would require the vehicle to have a full episodic memory of thousands of experiences it had had, as well as an episodic projectory (what it would expect to happen in its “life”, what it would hope for, and what it would fear), as well as an episodic subjunctory, detailing its thoughts about near misses it had had and about what would most likely have happened had things gone some other way.
Thus, Stanley the Robot Steamer would have to be able to think to itself such hypothetical future thoughts as, “Gee, I wonder if H1 will deliberately swerve out in front of me and prevent me from passing it, or even knock me off the road into the ditch down there! That’s what I would do if I were in its shoes!”
An article in