(3) They use abstract, symbolic, or verbal descriptions.

(4) They use models that we have made of ourselves.

Now suppose that a brain could construct a resource called C that detects when all these are running at once:

If such a C-detector turned out to be useful enough, this could lead us to imagine that it detects the presence of some sort of ‘Consciousness-Thing!’ Indeed, we might even imagine that entity to be the cause of that set of activities, and our language systems might learn to connect this kind of detector to terms like ‘awareness,’ ‘myself,’ ‘attention,’ or ‘Me’. To see how this might be useful to us, let’s examine its four constituents.
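To make the idea concrete, here is a minimal sketch in Python (the resource names are invented labels for the four constituents, not anything from the book's diagrams): a C-detector would simply fire when all four kinds of activity are running at once.

```python
# Hypothetical sketch of a 'C-detector': a resource that fires only
# when all four constituent activities are active at the same time.

class CDetector:
    CONSTITUENTS = (
        "recent_memories",
        "serial_processes",
        "symbolic_descriptions",
        "self_models",
    )

    def detect(self, active_resources):
        """Return True only when every constituent activity is running."""
        return all(name in active_resources for name in self.CONSTITUENTS)

detector = CDetector()
print(detector.detect({"recent_memories", "serial_processes"}))  # False
print(detector.detect(set(CDetector.CONSTITUENTS)))              # True
```

On this sketch, 'consciousness' would be nothing more than the firing of the detector — yet a language system connected to it could easily come to treat that signal as evidence of a single Consciousness-Thing.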

Recent Memories: Why must consciousness involve memory? I’ve always thought of consciousness as about the present, not the past—about what’s happening right now.

For any mind (or any machine) to know what it has done, it needs some records of recent activities. For example, suppose that I asked, “Are you aware that you’re touching your ear?” Then you might reply, “Yes, I’m aware that I am doing that.” However, for you to make a statement like that, your language resources must be reacting to signals from other parts of your brain, which in turn have reacted to prior events. So, whatever you say (or think) about yourself, it takes time to collect that evidence.

More generally, this means that a brain cannot think about what it is thinking right now; the best it can do is to contemplate some records of some of its recent activities. There is no reason why some part of a brain could not think about what it has seen of the activities of other parts—but even then, there will always be at least some small delay in between.
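This delay can be illustrated with a toy model (the class and method names here are invented for illustration): a 'mind' that logs its activities and can answer questions about itself only by consulting those time-stamped records, never by inspecting the act itself.

```python
import time
from collections import deque

class RecordingMind:
    """Toy model: a mind that can report only on records of past activity."""

    def __init__(self):
        self.log = deque(maxlen=100)  # records of recent activities

    def act(self, activity):
        # The act becomes available to introspection only via its record.
        self.log.append((time.time(), activity))

    def am_i_doing(self, activity):
        # Any self-report consults the log, so it always lags the event.
        return any(a == activity for _, a in self.log)

mind = RecordingMind()
mind.act("touching ear")
print(mind.am_i_doing("touching ear"))  # True, but based on a past record
```

Whatever `am_i_doing` reports is evidence gathered after the fact, which is the point of the passage above: self-knowledge is always knowledge of the recent past.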

Serial Processes. Why should our high-level processes tend to be more serial? Would it not be more efficient for us to do more things in parallel?

Most of the time in your everyday life, you do many things simultaneously; you have no trouble walking, talking, seeing, and scratching your ear all at once. But few of us can do a passable job of drawing a circle and a square at the same time, one with each hand.

Citizen: Perhaps each of those two particular tasks demands so much of your attention that you can’t concentrate on the other one.

That would make sense if you assume that attention is some sort of thing that comes in limited quantities—but then we would need a theory about what might impose this kind of limitation, yet still lets us walk, talk, and see all at once. One explanation could be that such limits appear when resources conflict. For suppose that two tasks are so similar that they both need to use the same mental resources. Then if we try to do both jobs at once, one of them will be forced to stop—and the more such conflicts arise in our brains, the fewer such jobs we can do simultaneously.
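A toy scheduler can sketch this conflict principle (the resource names are invented): tasks whose required resources are disjoint can all run together, while tasks that share a resource force all but one of them to wait.

```python
def runnable_together(tasks):
    """Greedily select tasks whose required resources don't overlap."""
    in_use, selected = set(), []
    for name, needs in tasks:
        if in_use.isdisjoint(needs):  # no conflict: can run in parallel
            in_use |= needs
            selected.append(name)
    return selected

# Walking, talking, and seeing use largely separate brain resources...
everyday = [
    ("walk", {"motor"}),
    ("talk", {"language"}),
    ("see", {"vision"}),
]
# ...but drawing two shapes both demand the same planning resource.
drawing = [
    ("draw circle", {"spatial_planning", "hand_control"}),
    ("draw square", {"spatial_planning", "hand_control"}),
]

print(runnable_together(everyday))  # ['walk', 'talk', 'see']
print(runnable_together(drawing))   # ['draw circle']
```

No quantity of 'attention' appears anywhere in this model; the serial behavior emerges purely from the overlap of required resources.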

Then why can we see, walk, and talk all at once? This presumably happens because our brains contain substantially separate systems for these—located in different parts of the brain—so that their resources don’t conflict so often. However, when we have to solve a problem that’s highly complex, we usually have only one recourse: somehow to break it up into several parts—each of which may require some high-level planning and thinking. For example, each of those subgoals might require us to develop one or more little ‘theories’ about the situation—and then do some mental experiments to see if these are plausible.

Why can’t we do all this simultaneously? One reason could simply be that our resources for making and using plans have evolved only recently—that is, in only the last few million years—and so we do not yet have multiple copies of them. In other words, we don’t yet have much capacity at our highest levels of ‘management’—for example, resources for keeping track of what’s left to be done and for finding ways to achieve those goals without causing too many internal conflicts. Also, our processes for doing such things are likely to use the kinds of symbolic descriptions discussed below—and those resources are limited too. If so, then our only option will be to focus on each of those goals sequentially.[56]

This sort of mutual exclusiveness could be a principal reason why we sometimes describe our thoughts as flowing in a ‘stream of consciousness’—or as taking the form of an ‘inner monologue’—a process in which a sequence of thoughts seems to resemble a story or narrative.[57] When our resources are limited, we may have no alternative to the rather slow ‘serial processing’ that so frequently is a prominent feature of what we call “high-level thinking.”[58]

Symbolic Descriptions: Why would we need to use symbols or words rather than, say, direct connections between cells in the brain?

Many researchers have developed schemes for learning from experience by making and changing connections between various parts of systems called ‘neural networks’ or ‘connectionist learning machines.’[59] Such systems have proved capable of learning to recognize various kinds of patterns—and it seems quite likely that such low-level processes could underlie most of the functions inside our brains.[60] However, although such systems are useful for many kinds of jobs, they cannot fulfill the needs of more reflective tasks, because they store information in the form of numerical values that are hard for other resources to use. One can try to interpret these numbers as correlations or likelihoods, but they carry no other clues about what those links might otherwise signify. In other words, such representations don’t have much expressiveness. For example, a small neural network of this kind might look like this.

In contrast, the diagram below shows what we call a “Semantic Network” that represents some of the relationships between the parts of a three-block Arch. For example, each link that points to the concept supports could be used to predict that the top block would fall if we removed a block that supports it.

Thus, whereas a ‘connectionist network’ shows only the ‘strength’ of each of those relations, and says nothing about those relations themselves, the three-way links of Semantic Networks can be used for many kinds of reasoning.
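The difference can be sketched in a few lines of Python (the triple-based representation here is an illustration, not the book's notation): a connectionist link carries only a numeric strength, while a labeled ‘supports’ triple lets a program reason about what would fall when a supporting block is removed.

```python
# A connectionist link carries only a numeric strength:
weights = {("left_block", "top_block"): 0.83}  # says nothing about the relation

# A semantic-network link is a labeled triple: (subject, relation, object).
arch = {
    ("left_block", "supports", "top_block"),
    ("right_block", "supports", "top_block"),
    ("left_block", "left_of", "right_block"),
}

def would_fall(removed_block, links):
    """Blocks predicted to fall when `removed_block` is taken away,
    treating each 'supports' link as a necessary support."""
    return {obj for subj, rel, obj in links
            if rel == "supports" and subj == removed_block}

print(would_fall("left_block", arch))  # {'top_block'}
print(would_fall("top_block", arch))   # set()
```

Nothing in the weight `0.83` could support the `would_fall` inference; it is the label `supports` on the link that other resources can reason with.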

Self-Models: Why did you include ‘Self-Models’ among the processes in your first diagram?

When Joan was thinking about what she had done, she asked herself, “What would my friends have thought of me?” But the only way she could answer such questions would be to use some descriptions or models that represent her friends and herself. Some of Joan’s models of herself will be descriptions of her physical body, others will represent some of her goals, and yet others will depict her dispositions in various social and physical contexts. Eventually we build additional structures that include collections of stories about our own pasts, ways to describe our mental states, bodies of knowledge about our capacities, and depictions of our acquaintances. Chapter §9 will further discuss how we make and use ‘models’ of ourselves.

Once Joan possesses a set of such models, she can use them to think self-reflectively—and she’ll feel that

You are reading The Emotion Machine