gardens you were promised, rosy lips and cheeks, thorns, rose-colored glasses, and so on. Doesn’t this imply that many far-flung regions of the brain must cooperate to generate the concept of a rose? Surely the word is just the handle, or focus, around which swirls a halo of associations, meanings, and memories.

There’s probably some truth to this, but the evidence from aphasics such as Dr. Hamdi suggests the very opposite—that the brain has neural circuits specialized for language. Indeed, it may even be that separate components or stages of language processing are dealt with by different parts of the brain, although we should really think of them as parts of one large interconnected system. We are accustomed to thinking of language as a single function, but this is an illusion. Vision feels like a unitary faculty to us as well, yet as noted in Chapter 2, seeing relies on numerous quasi-independent areas. Language is similar. A sentence, loosely speaking, has three distinct components, which are normally so closely interwoven that they don’t feel separate. First, there are the building blocks we call words (lexicon) that denote objects, actions, and events. Second, there is the actual meaning (semantics) conveyed by the sentence. And third, there is syntactic structure (loosely speaking, grammar), which involves the use of function words and recursion. The rules of syntax generate the complex hierarchical phrase structure of human language, which at its core allows the unambiguous communication of fine nuances of meaning and intention.

Human beings are the only creatures to have true language. Even chimps, who can be trained to sign simple sentences like “Give me fruit,” can’t come close to complex sentences such as “It’s true that Joe is the big alpha male, but he’s starting to get old and lazy, so don’t worry about what he might do unless he seems to be in an especially nasty mood.” The seemingly infinite flexibility and open-endedness of our language is one of the hallmarks of the human species. In ordinary speech, meaning and syntactic structure are so closely intertwined that it’s hard to believe that they are really distinct. But you can have a perfectly grammatical sentence that is meaningless gibberish, as in the linguist Noam Chomsky’s famous example, “Colorless green ideas sleep furiously.” Conversely, a meaningful idea can be conveyed adequately by a nongrammatical sentence, as Dr. Hamdi has shown us. (“It’s difficult, ummm, left side perfectly okay.”)

It turns out that different parts of the brain are specialized for these three different aspects of language: lexicon, semantics, and syntax. But the agreement among researchers ends there. The degree of specialization is hotly debated. Language, more than any other topic, tends to polarize academics. I don’t quite know why, but fortunately it isn’t my field. In any case, by most accounts Broca’s area seems mainly concerned with syntactic structure. So Dr. Hamdi had no better chance than a chimp of generating long sentences full of hypotheticals and subordinate clauses. Yet he had no difficulty in communicating his ideas by just stringing words together in approximately the right order, like Tarzan. (Or surfer dudes in California.)

One reason for thinking that Broca’s area is specialized exclusively for syntactic structure is the observation that it seems to have a life of its own, quite independent of the meaning conveyed. It’s almost as though this patch of cortex has an autonomous set of grammatical rules that are intrinsic to its networks. Some of them seem quite arbitrary and apparently nonfunctional, which is the main reason linguists assert its independence from semantics and meaning and dislike thinking of it as having evolved from anything else in the brain. The extreme view is exemplified by Chomsky, who believes that it didn’t even evolve through natural selection!

The brain region concerned with semantics is located in the left temporal lobe near the back of the great horizontal cleft in the middle of the brain (see Figure 6.1). This region, called Wernicke’s area, appears to be specialized for the representation of meaning. Dr. Hamdi’s Wernicke’s area was obviously intact. He could still comprehend what was said to him and could convey some semblance of meaning in his conversations. Conversely, Wernicke’s aphasia—what you get if your Wernicke’s area is damaged but your Broca’s area remains intact—is in a sense the mirror image of Broca’s aphasia: The patient can fluently generate elaborate, smoothly articulated, grammatically flawless sentences, but it’s all meaningless gibberish. At least that’s the official party line, but later I’ll provide evidence that this isn’t entirely true.

THESE BASIC FACTS about the major language-related brain areas have been known for more than a century. But many questions remain. How complete is the specialization? How does the neural circuitry within each area actually do its job? How autonomous are these areas, and how do they interact to generate smoothly articulated, meaningful sentences? How does language interact with thought? Does language enable us to think, or does thinking enable us to talk? Can we think in a sophisticated manner without silent internal speech? And lastly, how did this extraordinarily complex, multicomponent system originally come into existence in our hominin ancestors?

This last question is the most vexing. Our journey into full-blown humanity began with nothing but the primitive growls, grunts, and groans available to our primate cousins. By 75,000 to 150,000 years ago, the human brain was brimming with complex thoughts and linguistic skills. How did this happen? Clearly, there must have been a transitional phase, yet it’s hard to imagine how linguistic brain structures of intermediate complexity might have worked, or what functions they might have served along the way. The transitional phase must have been at least partially functional; otherwise it couldn’t have been selected for, nor served as an evolutionary bridge for the eventual emergence of more sophisticated language functions.

To understand what this bridge might have been is the main purpose of this chapter. I should point out that by “language” I don’t mean just “communication.” We often use the two words interchangeably, but in fact they are very different. Consider the vervet monkey. Vervets have three alarm calls to alert each other about predators. The call for leopard prompts the troop to bolt for the nearest trees. The call for serpent causes the monkeys to stand up on two legs and peer down into the grass. And when vervets hear the eagle call, they look up into the air and seek shelter in the underbrush. It’s tempting to conclude that these calls are like words, or at least the precursors to words, and that the monkey does have a primitive vocabulary of sorts. But do the monkeys really know there’s a leopard, or do they just rush for the nearest tree reflexively when an alarm call is sounded? Or perhaps the call really just means “climb” or “there’s danger on the ground,” rather than the much richer concept of leopard that a human brain harbors. This example tells us that mere communication isn’t language. Like an air-raid siren or a fire alarm, vervets’ cries are generalized alerts that refer to specific situations; they are almost nothing like words.

In fact, we can list a set of five characteristics that make human language unique and radically different from other types of communication we see in vervets or dolphins:

1. Our vocabulary (lexicon) is enormous. By the time a child is eight years old, she has almost six hundred words at her disposal—a figure that exceeds the nearest runner-up, the vervet monkey, by two orders of magnitude. One could argue, though, that this is really a matter of degree rather than a qualitative jump; maybe we just have much better memories.

2. More important than the sheer size of our lexicon is the fact that only humans have function words that exist exclusively in the context of language. While words like “dog,” “night,” or “naughty” refer to actual things or events, function words have no existence independent of their linguistic function. So even though a sentence such as “If gulmpuk is buga, then gadul will be too” is meaningless, we do understand the conditional nature of the statement because of the conventional usage of “if” and “then.”

3. Humans can use words “off-line,” that is, to refer to things or events that are not currently visible or exist only in the past, the future, or a hypothetical reality: “I saw an apple on the tree yesterday, and decided I will pluck it tomorrow but only if it is ripe.” This type of complexity isn’t found in most spontaneous forms of animal communication. (Apes who are taught sign language can, of course, use signs in the absence of the object being referred to. For example, they can sign “banana” when hungry.)
