what other people did as well as we recall what we did ourselves — which leaves everybody (even shirkers!) feeling that others have taken advantage of them. Realizing the limits of our own data sampling might make us all a lot more generous.

Mental contamination is so potent that even entirely irrelevant information can lead us by the nose. In one pioneering experiment, the psychologists Amos Tversky and Daniel Kahneman spun a wheel of fortune, marked with the numbers 1-100, and then asked their subjects a question that had nothing to do with the outcome of spinning the wheel: what percentage of African countries are in the United Nations? Most participants didn’t know for sure, so they had to estimate — fair enough. But their estimates were considerably affected by the number on the wheel. When the wheel registered 10, a typical response to the UN question was 25 percent, whereas when the wheel came up at 65, a typical response was 45 percent.[15]

This phenomenon, which has come to be known as “anchoring and adjustment,” occurs again and again. Try this one: add 400 to the last three digits of your cell phone number. When you’re done, answer the following question: in what year did Attila the Hun’s rampage through Europe finally come to an end? The average guess of people whose last three digits plus 400 yielded a sum less than 600 was A.D. 629, whereas the average guess of people whose sum came in between 1,200 and 1,399 was A.D. 979, a full 350 years later.[16]

What’s going on here? Why should a phone number or a spin on a wheel of fortune influence a belief about history or the composition of the UN? During the process of anchoring and adjustment, people begin at some arbitrary starting point and keep moving until they find an answer they like. If the number 10 pops up on the wheel, people start by asking themselves, perhaps unconsciously, “Is 10 a plausible answer to the UN question?” If not, they work their way up until they find a value (say, 25) that seems plausible. If 65 comes up, they may head in the opposite direction: “Is 65 a plausible answer? How about 55?” The trouble is, anchoring at a single arbitrarily chosen point can steer us toward answers that are just barely plausible: starting low leads people to the lowest plausible answer, but starting high leads them to the highest plausible answer. Neither strategy directs people to what might be the most sensible response — one in the middle of the range of plausible answers. If you think that the correct answer is somewhere between 25 and 45, why say 25 or 45? You’re probably better off guessing 35, but the psychology of anchoring means that people rarely do.
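
The adjust-until-plausible process is mechanical enough to sketch in a few lines of code. What follows is a toy simulation in Python; the fixed step size and the 25-to-45 “plausible range” are illustrative assumptions of mine, not parameters from the original study. A low anchor climbs only as far as the bottom edge of the plausible range, while a high anchor descends only to its top edge; neither ever reaches the midpoint:

    def anchored_guess(anchor, plausible_low, plausible_high, step=5):
        """Toy model of anchoring and adjustment: start at an arbitrary
        anchor and adjust in fixed steps, stopping at the first value
        that falls inside the plausible range."""
        guess = anchor
        while guess < plausible_low:   # low anchor: adjust upward
            guess += step
        while guess > plausible_high:  # high anchor: adjust downward
            guess -= step
        return guess

    # Suppose a subject privately finds answers between 25 and 45 plausible.
    print(anchored_guess(10, 25, 45))  # -> 25, the lowest plausible answer
    print(anchored_guess(65, 25, 45))  # -> 45, the highest plausible answer
    print((25 + 45) / 2)               # -> 35.0, the midpoint neither anchor reaches

Because the search stops at the first plausible value it meets, the direction of approach, not the evidence, determines the final answer.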

Anchoring has gotten a considerable amount of attention in the psychological literature, but it’s by no means the only illustration of how beliefs and judgments can be contaminated by peripheral or even irrelevant information. To take another example, people who are asked to hold a pen between their teeth gently, without letting it touch their lips, rate cartoons as more enjoyable than do people who hold a pen with pursed lips. Why should that be? You can get a hint if you try following these instructions while looking in a mirror: hold a pen between your teeth “gently, without letting it touch the lips.” Now look at the shape of your lips. You’ll see that the corners are upturned, in the position of a smile. And thus, through the force of context-dependent memory, upturned lips automatically prime happy thoughts.

A similar line of experiments asked people to use their non-dominant hand (the left, for right-handed people) to write down the names of celebrities as fast as they could while classifying them into categories (like, don’t like, neutral). They had to do this while either (1) pressing their dominant hand, palm down, against the top of a table or (2) pushing their dominant hand, palm upward, against the bottom of a table. Palms-up people listed more positive names than negative ones, while palms-down people listed more negative names than positive ones. Why? Palms up is a positive “approach” posture; palms down is an “avoid” posture. Such subtle differences, the data show, routinely affect our memories and, ultimately, our beliefs.

Another source of contamination is a kind of mental shortcut: the human tendency to believe that what is familiar is good. Take, for example, an odd phenomenon known as the “mere exposure” effect: if you ask people to rate things like Chinese characters, they tend to prefer the ones they have seen before to the ones they haven’t. Another study, replicated in at least 12 different languages, showed that people have a surprising attachment to the letters found in their own names, preferring words that contain those letters to words that don’t. One colleague of mine has even suggested, somewhat scandalously, that people may love famous paintings as much for their familiarity as for their beauty.

From the perspective of our ancestors, a bias in favor of the familiar may well have made sense; whatever great-great-great-grandma knew, and had survived, was probably a safer bet than what she didn’t know, which might do her in. A preference for the familiar may well have been adaptive, selected for in the usual way: creatures with a taste for the well known may have had more offspring than creatures with too extreme a predilection for novelty. Likewise, our desire for comfort foods, presumably those most familiar to us, seems to increase in times of stress; again, it’s easy to imagine an adaptive explanation.

In the domain of aesthetics, there’s no real downside to preferring what I’m already used to — it doesn’t really matter whether I like this Chinese character better than that one. Likewise, if my love of 1970s disco stems from mere familiarity rather than the exquisite musicianship of Donna Summer, so be it.

But our attachment to the familiar can be problematic too, especially when we don’t recognize the extent to which it influences our putatively rational decision making. In fact, the repercussions can take on global significance. For example, people tend to prefer social policies that are already in place to those that are not, even if no well-founded data prove that the current policies are working. Rather than analyze the costs and benefits, people often use this simple heuristic: “If it’s in place, it must be working.”

One recent study suggested that people will do this even when they have no idea what policies are in place. A team of Israeli researchers decided to take advantage of the many policies and local ordinances that most people know little about. So little, in fact, that the experimenters could easily get the subjects to believe whatever they suggested; the researchers then tested how attached people had become to whatever “truth” they had been led to believe in. For example, subjects were asked to evaluate policies such as the feeding of alley cats — should it be okay, or should it be illegal? The experimenter told half the subjects that alley-cat feeding was currently legal and the other half that it wasn’t, and then asked people whether the policy should be changed. Most people favored whatever the current policy was and tended to generate more reasons to favor it over the competing policy. The researchers found similar results with made-up rules about arts-and-crafts instruction. (Should students have five hours of instruction or seven? The current policy is X.) The same sort of love-the-familiar reasoning applies, of course, in the real world, where the stakes are higher, which explains why incumbents are almost always at an advantage in an election. Even recently deceased incumbents have been known to beat their still-living opponents.[17]

The more we are threatened, the more we tend to cling to the familiar. Just think of the tendency to reach for comfort food. Other things being equal, people under threat tend to become more attached than usual to their own groups, causes, and values. Laboratory studies, for example, have shown that if you make people contemplate their own death (“Jot down, as specifically as you can, what you think will happen to you as you physically die…”), they tend to be nicer than normal to members of their own religious and ethnic groups, but more negative toward outsiders. Fears of death also tend to polarize people’s political and religious beliefs: patriotic Americans who are made aware of their own mortality are more appalled (than patriots in a control group) by the idea of using the American flag as a sieve; devout Christians who are asked to reflect upon their own death are less tolerant of someone using a crucifix as a substitute hammer. (Charities, take note: we also open up our wallets more when we’ve just thought about death.) Another study has shown that people in general tend to become more negative toward minority groups in times of crisis; oddly enough, this holds true not just for members of the majority but even for members of the minority groups themselves.

People may even come to love, or at least accept, systems of government that profoundly threaten their self-interest. As the psychologist John Jost has noted, “Many people who lived under feudalism, the Crusades, slavery, communism, apartheid, and the Taliban believed that their systems were imperfect but morally defensible and [even sometimes] better than the alternatives they could envision.” In short, mental contamination can be very serious business.

Each of these examples of mental contamination — the focusing illusion, the halo effect, anchoring and adjustment, and the familiarity effect — underscores an important distinction that will recur throughout this book: as a rough guide, our thinking can be divided into two streams, one that is fast, automatic, and largely unconscious, and another that is slow, deliberate, and judicious.
