(possibly but not necessarily a two-dimensional image) composed of a vast number of tiny signals, but then it goes much further, eventually winding up in the selective triggering of a small subset of a large repertoire of dormant symbols — discrete structures that have representational quality. That is to say, a symbol inside a cranium, just like a simmball in the hypothetical careenium, should be thought of as a triggerable physical structure that constitutes the brain’s way of implementing a particular category or concept.

I should offer a quick caveat concerning the word “symbol” in this new sense, since the word comes laden with many prior associations, some of which I definitely want to avoid. We often refer to written tokens (letters of the alphabet, numerals, musical notes on paper, Chinese characters, and so forth) as “symbols”. That’s not the meaning I have in mind here. We also sometimes talk of objects in a myth, dream, or allegory (for example, a key, a flame, a ring, a sword, an eagle, a cigar, a tunnel) as being “symbols” standing for something else. This is not the meaning I have in mind, either. The idea I want to convey by the phrase “a symbol in the brain” is that some specific structure inside your cranium (or your careenium, depending on what species you belong to) gets activated whenever you think of, say, the Eiffel Tower. That brain structure, whatever it might be, is what I would call your “Eiffel Tower symbol”.

You also have an “Albert Einstein” symbol, an “Antarctica” symbol, and a “penguin” symbol, the latter being some kind of structure inside your brain that gets triggered when you perceive one or more penguins, or even when you are just thinking about penguins without perceiving any. There are also, in your brain, symbols for action concepts like “kick”, “kiss”, and “kill”, for relational concepts like “before”, “behind”, and “between”, and so on. In this book, then, symbols in a brain are the neurological entities that correspond to concepts, just as genes are the chemical entities that correspond to hereditary traits. Each symbol is dormant most of the time (after all, most of us seldom think about cotton candy, egg-drop soup, St. Thomas Aquinas, Fermat’s last theorem, Jupiter’s Great Red Spot, or dental-floss dispensers), but on the other hand, every symbol in our brain’s repertoire is potentially triggerable at any time.

The passage leading from vast numbers of received signals to a handful of triggered symbols is a kind of funneling process in which initial input signals are manipulated or “massaged”, the results of which selectively trigger further (i.e., more “internal”) signals, and so forth. This baton-passing by squads of signals traces out an ever-narrowing pathway in the brain, which winds up triggering a small set of symbols whose identities are of course a subtle function of the original input signals.

Thus, to give a hopefully amusing example, myriads of microscopic olfactory twitchings in the nostrils of a voyager walking down an airport concourse can lead, depending on the voyager’s state of hunger and past experiences, to a joint triggering of the two symbols “sweet” and “smell”, or a triggering of the symbols “gooey” and “fattening”, or of the symbols “Cinnabon” and “nearby”, or of the symbols “wafting”, “advertising”, “subliminal”, “sly”, and “gimmick” — or perhaps a triggering of all eleven of these symbols in the brain, in some sequence or other. Each of these examples of symbol-triggering constitutes an act of perception, as opposed to the mere reception of a gigantic number of microscopic signals arriving from some source, like a million raindrops landing on a roof.

In the interests of clarity, I have painted too simple a picture of the process of perception, for in reality, there is a great deal of two-way flow. Signals don’t propagate solely from the outside inwards, towards symbols; expectations from past experiences simultaneously give rise to signals propagating outwards from certain symbols. There takes place a kind of negotiation between inward-bound and outward-bound signals, and the result is the locking-in of a pathway connecting raw input to symbolic interpretation. This mixture of directions of flow in the brain makes perception a truly complex process. For the present purposes, though, it suffices to say that perception means that, thanks to a rapid two-way flurry of signal-passing, impinging torrents of input signals wind up triggering a small set of symbols, or in less biological words, activating a few concepts.

In summary, the missing ingredient in a video system, no matter how high its visual fidelity, is a repertoire of symbols that can be selectively triggered. Only if such a repertoire existed and were accessed could we say that the system was actually perceiving anything. Still, nothing prevents us from imagining augmenting a vanilla video system with additional circuitry of great sophistication that supports a cascade of signal-massaging processes that lead toward a repertoire of potentially triggerable symbols. Indeed, thinking about how one might tackle such an engineering challenge is a helpful way of simultaneously envisioning the process of perception in the brain of a living creature and its counterpart in the cognitive system of an artificial mind (or an alien creature, for that matter). However, quite obviously, not all realizations of such an architecture, whether earthbound, alien, or artificial, will possess equally rich repertoires of symbols to be potentially triggered by incoming stimuli. As I have done earlier in this book, I wish once again to consider sliding up the scale of sophistication.

Mosquito Symbols

Suppose we begin with a humble mosquito (not that I know any arrogant ones). What kind of representation of the outside world does such a primitive creature have? In other words, what kind of symbol repertoire is housed inside its brain, available for tapping into by perceptual processes? Does a mosquito even know or believe that there are objects “out there”? Suppose the answer is yes, though I am skeptical about that. Does it assign the objects it registers as such to any kind of categories? Do words like “know” or “believe” apply in any sense to a mosquito?

Let’s be a little more concrete. Does a mosquito (of course without using words) divide the external world up into mental categories like “chair”, “curtain”, “wall”, “ceiling”, “person”, “dog”, “fur”, “leg”, “head”, or “tail”? In other words, does a mosquito’s brain incorporate symbols — discrete triggerable structures — for such relatively high abstractions? This seems pretty unlikely; after all, to do its mosquito thing, a mosquito could do perfectly well without such “intellectual” luxuries. Who cares if I’m biting a dog, a cat, a mouse, or a human — and who cares if it’s an arm, an ear, a tail, or a leg — as long as I’m drawing blood?

What kinds of categories, then, does a mosquito need to have? Something like “potential source of food” (a “goodie”, for short) and “potential place to land” (a “port”, for short) seem about as rich as I expect its category system to be. It may also be dimly aware of something that we humans would call a “potential threat” — a certain kind of rapidly moving shadow or visual contrast (a “baddie”, for short). But then again, “aware”, even with the modifier “dimly”, may be too strong a word. The key issue here is whether a mosquito has symbols for such categories, or could instead get away with a simpler type of machinery not involving any kind of perceptual cascade of signals that culminates in the triggering of symbols.

If this talk of bypassing symbols and managing with a very austere substitute for perception strikes you as a bit blurry, then consider the following questions. Is a toilet aware, no matter how slightly, of its water level? Is a thermostat aware, albeit extremely feebly, of the temperature it is controlling? Is a heat-seeking missile aware, be it ever so minimally, of the heat emanating from the airplane that it is pursuing? Is the Exploratorium’s jovially jumping red spot aware, though only terribly rudimentarily, of the people from whom it is forever so gaily darting away? If you answered “no” to these questions, then imagine similarly unaware mechanisms inside a mosquito’s head, enabling it to find blood and to avoid getting bashed, yet to accomplish these feats without using any ideas.

Mosquito Selves

Having considered mosquito symbols, we now inch closer to the core of our quest. What is the nature of a mosquito’s interiority? That is, what is a mosquito’s experience of “I”-ness? How rich a sense of self is a mosquito endowed with? These questions are very ambitious, so let’s try something a little simpler. Does a mosquito have a visual image of how it looks? I hope you share my skepticism on this score. Does a mosquito know that it has wings or legs or a head? Where on earth would it get ideas like “wings” or “head”? Does it know that it has eyes or a proboscis? The mere suggestion seems ludicrous. How would it ever find such things out? Let’s instead speculate a bit about our mosquito’s knowledge of its own internal state. Does it have a sense of being hot or cold? Of being tuckered out or full of pep? Hungry or starved? Happy or sad? Hopeful or frightened? I’m sorry, but even these strike me as lying well beyond the pale, for an entity as humble as a mosquito.

You are reading I Am a Strange Loop