architectural structures in the heart, just as car buyers don’t think about the physics of protons and neutrons or the chemistry of alloys, but concentrate instead on high abstractions such as comfort, safety, fuel efficiency, maneuverability, sexiness, and so forth. And thus, to close out my heart–brain analogy, the bottom line is simply that the microscopic level may well be — or rather, almost certainly is — the wrong level in the brain on which to look, if we are seeking to explain such enormously abstract phenomena as concepts, ideas, prototypes, stereotypes, analogies, abstraction, remembering, forgetting, confusing, comparing, creativity, consciousness, sympathy, empathy, and the like.

Can Toilet Paper Think?

Simple though this analogy is, its bottom line seems sadly to sail right by many philosophers, brain researchers, psychologists, and others interested in the relationship between brain and mind. For instance, consider the case of John Searle, a philosopher who has spent much of his career heaping scorn on artificial-intelligence research and computational models of thinking, taking special delight in mocking Turing machines.

A momentary digression… Turing machines are extremely simple idealized computers whose memory consists of an infinitely long (i.e., arbitrarily extensible) “tape” of so-called “cells”, each of which is just a square that either is blank or has a dot inside it. A Turing machine comes with a movable “head”, which looks at any one square at a time, and can “read” the cell (i.e., tell if it has a dot or not) and “write” on it (i.e., put a dot there, or erase a dot). Lastly, a Turing machine has, stored in its “head”, a fixed list of instructions telling the head under which conditions to move left one cell or right one cell, or to make a new dot or to erase an old dot. Though the basic operations of all Turing machines are supremely trivial, any computation of any sort can be carried out by an appropriate Turing machine (numbers being represented by adjacent dot-filled cells, so that “•••” flanked by blanks would represent the integer 3).
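
Since the description above is completely mechanical, it can be pinned down in a few lines of code. The following Python sketch is my own illustration, not anything from the text: it adds an explicit internal "state" standing in for the "conditions" that the head's instruction list consults, and the sample rule table shown ("increment") simply walks rightward along a block of dots and appends one more, thus computing n + 1 in the unary notation just described.

```python
# A minimal Turing machine over a two-symbol alphabet: '.' (dot) and ' ' (blank).
# The tape is a dict from cell index to symbol, so it is "infinite" in both
# directions (missing cells read as blank). The rule table maps
# (state, symbol_read) -> (symbol_to_write, head_move, next_state).

from collections import defaultdict

def run_turing_machine(rules, tape_symbols, state="start", max_steps=10_000):
    tape = defaultdict(lambda: " ", enumerate(tape_symbols))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += {"L": -1, "R": +1}[move]
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1))

# An invented example rule table: scan right past a block of dots, write one
# more dot at its end, then halt -- i.e., compute n + 1 in unary notation.
increment = {
    ("start", "."): (".", "R", "start"),  # keep moving right over dots
    ("start", " "): (".", "R", "halt"),   # first blank: add a dot, halt
}

print(run_turing_machine(increment, "..."))  # '...' (3) becomes '....' (4)
```

The same machinery would run unchanged whether the cells were squares of toilet paper with pebbles on them or patches of magnetized tape, which is exactly the substrate-independence that the mockery discussed below trades on.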

Back now to philosopher John Searle. He has gotten a lot of mileage out of the fact that a Turing machine is an abstract machine, and therefore could, in principle, be built out of any materials whatsoever. In a ploy that, in my opinion, should fool only third-graders but that unfortunately takes in great multitudes of his professional colleagues, he pokes merciless fun at the idea that thinking could ever be implemented in a system made of such far-fetched physical substrates as toilet paper and pebbles (the tape would be an infinite roll of toilet paper, and a pebble on a square of paper would act as the dot in a cell), or Tinkertoys, or a vast assemblage of beer cans and ping-pong balls bashing together.

In his vivid writings, Searle gives the appearance of tossing off these humorous images light-heartedly and spontaneously, but in fact he is carefully and premeditatedly instilling in his readers a profound prejudice, or perhaps merely profiting from a preexistent prejudice. After all, it does sound preposterous to propose “thinking toilet paper” (no matter how long the roll might be, and regardless of whether pebbles are thrown in for good measure), or “thinking beer cans”, “thinking Tinkertoys”, and so forth. The light-hearted, apparently spontaneous images that Searle puts up for mockery are in reality skillfully calculated to make his readers scoff at such notions without giving them further thought — and sadly, they often work.

The Terribly Thirsty Beer Can

Indeed, Searle goes very far in his attempt to ridicule the systems that he portrays in this humorous fashion. For example, to ridicule the notion that a gigantic system of interacting beer cans might “have experiences” (yet another term for consciousness), he takes thirst as the experience in question, and then, in what seems like a casual allusion to something obvious to everyone, he drops the idea that in such a system there would have to be one particular can that would “pop up” (whatever that might mean, since he conveniently leaves out all description of how these beer cans might interact) on which the English words “I am thirsty” are written. The popping-up of this single beer can (a micro-element of a vast system, and thus comparable to, say, one neuron or one synapse in a brain) is meant to constitute the system’s experience of thirst. In fact, Searle has chosen this silly image very deliberately, because he knows that no one would attribute the slightest plausibility to it. How could a metallic beer can possibly experience thirst? And how would its “popping up” constitute thirst? And why should the words “I am thirsty” written on a beer can be taken any more seriously than the words “I want to be washed” scribbled on a truck caked in mud?

The sad truth is that this image is the most ludicrous possible distortion of computer-based research aimed at understanding how cognition and sensation take place in minds. It could be criticized in any number of ways, but the key sleight of hand that I would like to focus on here is how Searle casually states that the experience claimed for this beer-can brain model is localized to one single beer can, and how he carefully avoids any suggestion that one might instead seek the system’s experience of thirst in a more complex, more global, high-level property of the beer cans’ configuration.

When one seriously tries to think of how a beer-can model of thinking or sensation might be implemented, the “thinking” and the “feeling”, no matter how superficial they might be, would not be localized phenomena associated with a single beer can. They would be vast processes involving millions or billions or trillions of beer cans, and the state of “experiencing thirst” would not reside in three English words pre-painted on the side of a single beer can that popped up, but in a very intricate pattern involving huge numbers of beer cans. In short, Searle is merely mocking a trivial target of his own invention. No serious modeler of mental processes would ever propose the idea of one lonely beer can (or neuron) for each sensation or concept, and so Searle’s cheap shot misses the mark by a wide margin.

It’s also worth noting that Searle’s image of the “single beer can as thirst-experiencer” is but a distorted replay of a long-discredited idea in neurology — that of the “grandmother cell”. This is the idea that your visual recognition of your grandmother would take place if and only if one special cell in your brain were activated, that cell constituting your brain’s physical representation of your grandmother. What significant difference is there between a grandmother cell and a thirst can? None at all. And yet, because John Searle has a gift for catchy imagery, his specious ideas have, over the years, had a great deal of impact on many professional colleagues, graduate students, and lay people.
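
The difference between lodging a concept in one element and spreading it across a vast pattern can be made concrete with a toy sketch in Python. Everything in it (the unit counts, the pattern density, the damage rate) is invented purely for illustration; it depicts no one's actual model, only the contrast between the two coding schemes.

```python
# Toy contrast between a "grandmother cell" code and a distributed code.
# All numbers here (unit counts, pattern density, damage rate) are
# invented for illustration; this is no one's actual model of anything.
import numpy as np

rng = np.random.default_rng(0)
n_units = 10_000  # stand-ins for neurons, or for Searle's beer cans

# Localist ("thirst can") code: one dedicated unit carries the whole concept.
localist = np.zeros(n_units)
localist[42] = 1.0  # the single can that "pops up"

# Distributed code: the concept is a pattern spread across ~20% of the
# units; "thirst" is the global match between the system's current state
# and this stored pattern, not the activity of any one unit.
pattern = (rng.random(n_units) < 0.2).astype(float)

# Knock out a random 5% of the units and see how much of each code survives.
alive = rng.random(n_units) >= 0.05
distributed_left = (pattern * alive).sum() / pattern.sum()
localist_left = (localist * alive).sum() / localist.sum()
print(f"distributed code retains {distributed_left:.0%} of its pattern")  # ~95%
print(f"localist code retains {localist_left:.0%}")  # 100% or 0%, nothing between
```

The all-or-nothing fragility of the dedicated unit is part of why the grandmother-cell idea was discredited: a distributed pattern degrades only in proportion to the damage, whereas the lone cell (or can) is one unlucky hit away from total loss.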

It’s not my aim here to attack Searle in detail (that would take a whole dreary chapter), but to point out how widespread is the tacit assumption that the level of the most primordial physical components of a brain must also be the level at which the brain’s most complex and elusive mental properties reside. Just as many aspects of a mineral (its density, its color, its magnetism or lack thereof, its optical reflectivity, its thermal and electrical conductivity, its elasticity, its heat capacity, how fast sound spreads through it, and on and on) are properties that come from how its billions of atomic constituents interact and form high-level patterns, so mental properties of the brain reside not on the level of a single tiny constituent but on the level of vast abstract patterns involving those constituents.

Dealing with brains as multi-level systems is essential if we are to make even the slightest progress in analyzing elusive mental phenomena such as perception, concepts, thinking, consciousness, “I”, free will, and so forth. Trying to localize a concept or a sensation or a memory (etc.) down to a single neuron makes no sense at all. Even localization to a higher level of structure, such as a column in the cerebral cortex (these are small structures containing on the order of forty neurons, and they exhibit a more complex collective behavior than single neurons do), makes no sense when it comes to aspects of thinking like analogy-making or the spontaneous bubbling-up of episodes from long ago.

Levels and Forces in the Brain

I once saw a book whose title was “Molecular Gods: How Molecules Determine Our Behavior”. Although I didn’t buy it, its title stimulated many thoughts in my brain. (What is a thought in a brain? Is a thought really inside a brain? Is a thought made of molecules?) Indeed, the very fact that I soon placed the book back up on the shelf is a perfect example of the kinds of thoughts that its title triggered in my brain. What exactly determined my behavior that day (e.g., my interest in the book, my pondering about its title, my decision not to buy it)? Was it some molecules inside my brain that made me reshelve it? Or was it some ideas in my brain? What is the proper level at which to describe what made me act as I did?
