All the squares were of the same color except one. The participants were asked to press one of two buttons, depending on whether the odd square out was in the left half of the circle or in the right. In the picture above, the odd square out is roughly at eight o’clock, so the correct response would be to press the left button. The participants were given a series of such tasks, and in each one the odd one out changed color and position. Sometimes it was blue whereas the others were green, sometimes it was green but a different shade from all the other greens, sometimes it was green but the others were blue, and so on. As the task is simple, the participants generally pressed the correct button. But what was actually being measured was the time it took them to respond.

As expected, the speed of recognizing the odd square out depended principally on the objective distance between the shades. Regardless of whether it appeared on the left or on the right, participants were always quicker to respond the farther the shade of the odd one out was from the rest. But the startling result was a significant difference between the reaction patterns in the right and in the left visual fields. When the odd square out appeared on the right side of the screen, the half that is processed in the same hemisphere as language, the border between green and blue made a real difference: the average reaction time was significantly shorter when the odd square out was across the green-blue border from the rest. But when the odd square out was on the left side of the screen, the effect of the green-blue border was far weaker. In other words, the speed of the response was much less influenced by whether the odd square out was across the green-blue border from the rest or whether it was a different shade of the same color.

So the left half of English speakers’ brains showed the same response toward the blue-green border that Russian speakers displayed toward the siniy-goluboy border, whereas the right hemisphere showed only weak traces of a skewing effect. The results of this experiment (as well as a series of subsequent adaptations that have corroborated its basic conclusions) leave little room for doubt that the color concepts of our mother tongue interfere directly with the processing of color. Short of actually scanning the brain, the two-hemisphere experiment provides the most direct evidence so far of the influence of language on visual perception.

Short of scanning the brain? A group of researchers from the University of Hong Kong saw no reason to fall short of that. In 2008, they published the results of a similar experiment, only with a little twist. As before, the recognition task involved staring at a computer screen, recognizing colors, and pressing one of two buttons. The difference was that the doughty participants were asked to complete this task while lying in the tube of an fMRI scanner. fMRI, or functional magnetic resonance imaging, is a technique that produces real-time scans of the brain by measuring the level of blood flow in its different regions. Since increased blood flow corresponds to increased neural activity, the fMRI scanner measures (albeit indirectly) the level of neural activity at any point in the brain.

In this experiment, the mother tongue of the participants was Mandarin Chinese. Six different colors were used: three of them (red, green, and blue) have common and simple names in Mandarin, while three other colors do not (see figure 10). The task was very simple: the participants were shown two squares on the screen for a split second, and all they had to do was indicate by pressing a button whether the two squares were identical in color or not.

The task did not involve language in any way. It was again a purely visual-motoric exercise. But the researchers wanted to see if language areas of the brain would nevertheless be activated. They assumed that linguistic circuits would more likely get involved with the visual task if the colors shown had common and simple names than if there were no obvious labels for them. And indeed, two specific small areas in the cerebral cortex of the left hemisphere were activated when the colors were from the easy-to-name group but remained inactive when the colors were from the difficult-to-name group.

To determine the function of these two left-hemisphere areas more accurately, the researchers administered a second task to the participants, this time explicitly language-related. The participants were shown colors on the screen, and while their brains were being scanned they were asked to say aloud what each color was called. The two areas that had been active earlier only with the easy-to-name colors now lit up as being heavily active. So the researchers concluded that the two specific areas in question must house the linguistic circuits responsible for finding color names.

If we project the function of these two areas back to the results of the first (purely visual) task, it becomes clear that when the brain has to decide whether two colors look the same or not, the circuits responsible for visual perception ask the language circuits for help in making the decision, even if no speaking is involved. So for the first time, there is now direct neurophysiologic evidence that areas of the brain that are specifically responsible for name finding are involved with the processing of purely visual color information.

In the light of the experiments reported in this chapter, color may be the area that comes closest in reality to the metaphor of language as a lens. Of course, language is not a physical lens and does not affect the photons that reach the eye. But the sensation of color is produced in the brain, not the eye, and the brain does not take the signals from the retina at face value, as it is constantly engaged in a highly complex process of normalization, which creates an illusion of stable colors under different lighting conditions. The brain achieves this “instant fix” effect by shifting and stretching the signals from the retina, by exaggerating some differences while playing down others. No one knows exactly how the brain does all this, but what is clear is that it relies on past memories and on stored impressions. It has been shown, for instance, that a perfectly gray picture of a banana can appear slightly yellow to us, because the brain remembers bananas as yellow and so normalizes the sensation toward what it expects to see. (For further details, see the appendix.)

It is likely that the involvement of language with the perception of color takes place on this level of normalization and compensation, where the brain relies on its store of past memories and established distinctions in order to decide how similar certain colors are. And although no one knows yet what exactly goes on between the linguistic and the visual circuits, the evidence gathered so far amounts to a compelling argument that language does affect our visual sensation. In Kay and Kempton’s top-down experiment from 1984, English speakers insisted that shades across the green-blue border looked farther apart to them. The bottom-up approach of more recent experiments shows that the linguistic concepts of color are directly involved in the processing of visual information, and that they make people react to colors of different names as if these were farther apart than they are objectively. Taken together, these results lead to a conclusion that few would have been prepared to believe just a few years ago: that speakers of different languages may perceive colors slightly differently after all.

In one sense, therefore, the color odyssey that Gladstone launched in 1858 has ended up, after a century and a half of peregrination, within spitting distance of his starting point. For in the end, it may well be that the Greeks did perceive colors slightly differently from us. But even if we have concluded the journey staring Gladstone right in the face, we are not entirely seeing eye to eye with him, because we have turned his story on its head and have reversed the direction of cause and effect in the relation between language and perception. Gladstone assumed that the difference between Homer’s color vocabulary and ours was a result of preexisting differences in color perception. But it now seems that the vocabulary of color in different languages can be the cause of differences in the perception of color. Gladstone thought that Homer’s unrefined color vocabulary was a reflection of the undeveloped state of his eye’s anatomy. We know that nothing has changed in the eye’s anatomy over the last few millennia, and yet the habits of mind instilled by our more refined color vocabulary may have made us more sensitive to some fine color distinctions nonetheless.

More generally, the explanation for cognitive differences between ethnic groups has shifted over the last two centuries, from anatomy to culture. In the nineteenth century, it was generally assumed that there were significant inequalities between the hereditary mental faculties of different races, and that these biological inequalities were the main reason for their varying accomplishments. One of the jewels in the crown of the twentieth century was the recognition of the fundamental unity of mankind in all that concerns its cognitive endowment. So nowadays we no longer look primarily to the genes to explain variations in mental characteristics among ethnic groups. But in the twenty-first century, we are beginning to appreciate the differences in thinking that are imprinted by cultural conventions and, in particular, by speaking in different tongues.
