the less-than-objective ancestral system. We can reason as carefully as we like, but, as they say in computer science jargon, 'garbage in, garbage out.' There's no guarantee that the ancestral system will pass along a balanced set of data. Worse, when we are stressed, tired, or distracted, our deliberative system tends to be the first thing to go, leaving us at the mercy of our lower-tech reflexive system — just when we might need our deliberative system the most.

The unconscious influence of our ancestral system is so strong that when our conscious mind tries to get control of the situation, the effort sometimes backfires. For example, in one study, people were put under time pressure and asked to make rapid judgments. Those who were told to (deliberately) suppress sexist thoughts (themselves presumably the product of the ancestral reflexive system) actually became more likely than control subjects to have sexist thoughts. Even more pernicious is the fact that as evolution layered reason on top of contextually driven memory, it left us with the illusion of objectivity. Evolution gave us the tools to deliberate and reason, but it didn't give us any guarantee that we'd be able to use them without interference. We feel as if our beliefs are based on cold, hard facts, but often they are shaped by our ancestral system in subtle ways that we are not even aware of.

No matter what we humans think about, we tend to pay more attention to stuff that fits in with our beliefs than stuff that might challenge them. Psychologists call this 'confirmation bias.' When we have embraced a theory, large or small, we tend to be better at noticing evidence that supports it than evidence that might run counter to it.

Consider the quasi-astrological description that opened this chapter. A person who wants to believe in astrology might notice the parts that seem true ('you have a need for other people to like and admire you') and ignore the parts that aren't (maybe from the outside you don't really look so disciplined after all). A person who wishes to believe in horoscopes may notice the one time that their reading seems dead-on and ignore (or rationalize) the thousands of times when their horoscopes are worded so ambiguously that they could mean anything. That's confirmation bias.

Take, for example, an early experiment conducted by the British psychologist Peter Wason. Wason presented his subjects with a triplet of numbers (for example, 2-4-6) and asked them to guess what rule might have generated it. Subjects were then asked to propose new sequences of their own and were told whether each conformed to the rule. A typical subject might guess '4-6-8,' be told yes, and proceed to try '8-10-12' and again be told yes; the subject might then conclude that the rule was something like 'sequences of three even numbers with two added each time.' What most people failed to do, however, was consider potentially disconfirming evidence. For example, was 1-3-5 or 1-3-4 a valid sequence? Few subjects bothered to ask; as a consequence, hardly anybody guessed that the actual rule was simply 'any sequence of three ascending numbers.' Put more generally, people all too often look for cases that confirm their theories rather than consider whether some alternative principle might work better.
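To make the logic of Wason's task concrete, here is a minimal sketch (the function names and the choice of test triples are my own illustration, not part of Wason's study): triples that fit a subject's narrow hypothesis tell the two rules apart only when the subject also tries cases that the hypothesis says should fail.

```python
# Illustrative sketch of Wason's 2-4-6 task (hypothetical code, not from the study).

def subjects_hypothesis(triple):
    """The narrow guess: three even numbers, each two more than the last."""
    a, b, c = triple
    return all(n % 2 == 0 for n in triple) and b == a + 2 and c == b + 2

def actual_rule(triple):
    """Wason's real rule: any three ascending numbers."""
    a, b, c = triple
    return a < b < c

# Confirming tests: both rules say yes, so the feedback cannot separate them.
for triple in [(4, 6, 8), (8, 10, 12)]:
    print(triple, subjects_hypothesis(triple), actual_rule(triple))  # True True

# Potentially disconfirming tests: only these reveal that the narrow guess is wrong.
for triple in [(1, 3, 5), (1, 3, 4)]:
    print(triple, subjects_hypothesis(triple), actual_rule(triple))  # False True
```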

In a later, less benign study, two different groups of people saw a videotape of a child taking an academic test. One group of viewers was led to believe that the child came from a socioeconomically privileged background, the other that the child came from a socioeconomically impoverished background. Those who thought the child was wealthier reported that the child was doing well and performing above grade level; the other group guessed that the child was performing below grade level.

Confirmation bias might be an inevitable consequence of contextually driven memory. Because we retrieve memory not by systematically searching for all relevant data (as computers do) but by finding things that match, we can't help but be better at noticing things that confirm the notions we begin with. When you think about the O. J. Simpson murder trial, if you were predisposed to think he was guilty, you're likely to find it easier to remember evidence that pointed toward his guilt (his motive, the DNA evidence, the lack of other plausible suspects) rather than evidence that cast doubt on it (the shoddy police work and that infamous glove that didn't fit).

To consider something well, of course, is to evaluate both sides of an argument, but unless we go the extra mile of deliberately forcing ourselves to consider alternatives — not something that comes naturally — we are more prone to recall evidence consistent with an accepted proposition than evidence inconsistent with it. And since we most clearly remember information that seems consistent with our beliefs, it becomes very hard to let those beliefs go, even when they are erroneous.

The same, of course, goes for scientists. The aim of science is to take a balanced approach to evidence, but scientists are human beings, and human beings can't help but notice evidence that confirms their own theories. Read the science texts of the past and you will stumble on not only geniuses but also people who in hindsight seem like crackpots — flat-earthers, alchemists, and so forth. History is not kind to scientists who believed in such fictions, but a realist might recognize that in a species so dependent on contextually driven memory, such slip-ups are always a risk.

In 1913 Eleanor Porter wrote one of the more influential children's novels of the twentieth century, Pollyanna, a story of a girl who looked on the bright side of every situation. Over time, the name Pollyanna has become a commonly used term with two different connotations. It's used in a positive way to describe eternal optimists and in a negative way to describe people whose optimism exceeds the rational bounds of reality. Pollyanna may have been a fictional character, but there's a little bit of her in all of us, a tendency to perceive the world in positive ways that may or may not match reality. Generals and presidents fight on in wars that can't be won, and scientists retain beliefs in pet theories long after the weight of evidence is stacked against them.

Consider the following study, conducted by the late Ziva Kunda. A group of subjects comes into the lab. They are told they'll be playing a trivia game; before they play, they get to watch someone else, who, they are told, will play either on their team (half the subjects hear this) or on the opposite team (that's what the other half are told). Unbeknownst to the subjects, the game is rigged; the person they're watching proceeds to play a perfect game, getting every question right. The researchers want to know whether each subject is impressed by this. The result is straight out of Pollyanna: people who expect to play with the perfect-game-playing confederate are impressed; the guy must be great, they think. People who expect to play against the confederate are dismissive; they attribute his good performance to luck rather than skill. Same data, different interpretation: both groups of subjects observe someone play a perfect game, but what they make of that observation depends on the role they expect the observed man to play in their own life.

In a similar study, a bunch of college students viewed videos of three people having a conversation; they were asked to judge how likable each of the three was. The subjects were also told (prior to watching the video) that they would be going out on a date with one of those three people (selected at random for each subject). Inevitably, subjects tended to give their highest rating to the person they were told they would be dating — another illustration of how easily our beliefs (in this case, about someone's likability) can be contaminated by what we wish to believe. In the words of a musical I loved as a child, Harry Nilsson's The Point!, 'You see what you want to see, and you hear what you want to hear. Dig?'

Our tendency to accept what we wish to believe (what we are motivated to believe) with much less scrutiny than what we don't want to believe is a bias known as 'motivated reasoning,' a kind of flip side to confirmation bias. Whereas confirmation bias is an automatic tendency to notice data that fit with our beliefs, motivated reasoning is the complementary tendency to scrutinize ideas more carefully if we don't like them than if we do. Take, for example, a study in which Kunda asked subjects, half men, half women, to read an article claiming that caffeine
