familiar is good. Take, for example, an odd phenomenon known as the 'mere exposure' effect: if you ask people to rate things like the characters in Chinese writing, they tend to prefer those that they have seen before to those they haven't. Another study, replicated in at least 12 different languages, showed that people have a surprising attachment to the letters that appear in their own names.
From the perspective of our ancestors, a bias in favor of the familiar may well have made sense; what great-great-great-grandma knew and didn't kill her was probably a safer bet than what she didn't know — which might do her in. Preference for the familiar may well have been adaptive in our ancestors, selected for in the usual ways: creatures with a taste for the well known may have had more offspring than creatures with too extreme a predilection for novelty. Likewise, our desire for comfort foods, presumably those most familiar to us, seems to increase in times of stress; again, it's easy to imagine an adaptive explanation.
In the domain of aesthetics, there's no real downside to preferring what I'm already used to — it doesn't really matter whether I like this Chinese character better than that one. Likewise, if my love of 1970s disco stems from mere familiarity rather than the exquisite musicianship of Donna Summer, so be it.
But our attachment to the familiar can be problematic too, especially when we don't recognize the extent to which it influences our putatively rational decision making. In fact, the repercussions can take on global significance. For example, people tend to prefer social policies that are already in place to those that are not, even if no well-founded data prove that the current policies are working. Rather than analyze the costs and benefits, people often use this simple heuristic: 'If it's in place, it must be working.'
One recent study suggested that people will do this even when they have no idea what policies are in place. A team of Israeli researchers decided to take advantage of the many policies and local ordinances that most people know little about. So little, in fact, that the experimenters could easily get the subjects to believe whatever they suggested; the researchers then tested how attached people had become to whatever 'truth' they had been led to believe in. For example, subjects were asked to evaluate policies such as the feeding of alley cats — should it be okay, or should it be illegal? The experimenter told half the subjects that alley-cat feeding was currently legal and the other half that it wasn't, and then asked people whether the policy should be changed. Most people favored whatever the current policy was and tended to generate more reasons to favor it over the competing policy. The researchers found similar results with made-up rules about arts-and-crafts instruction. (Should students have five hours of instruction or seven? The current policy is X.) The same sort of love-the-familiar reasoning applies, of course, in the real world, where the stakes are higher, which explains why incumbents are almost always at an advantage in an election. Even recently deceased incumbents have been known to beat their still-living opponents.*
The more we are threatened, the more we tend to cling to the familiar. Just think of the tendency to reach for comfort food. Other things being equal, people under threat tend to become more attached than usual to their own groups, causes, and values. Laboratory studies, for example, have shown that if you make people contemplate their own death ('Jot down, as specifically as you can, what you think will happen to you as you physically die'), they cling still more tightly to their own groups, causes, and values.
People may even come to love, or at least accept, systems of government that profoundly threaten their self-interest. As the psychologist John Jost has noted, 'Many people who lived under feudalism, the Crusades, slavery, communism, apartheid, and the Taliban believed that their systems were imperfect but morally defensible and [even sometimes] better than the alternatives they could envision.' In short, mental contamination can be very serious business.

*In March 2006, in Sierra Vista, Arizona, Bob Kasun, dead for nine days, won by a margin of nearly three to one.
Each of these examples of mental contamination — the focusing illusion, the halo effect, anchoring and adjustment, and the familiarity effect — underscores an important distinction that will recur throughout this book: as a rough guide, our thinking can be divided into two streams, one that is fast, automatic, and largely unconscious, and another that is slow, deliberate, and judicious.
The former stream, which I will refer to as the ancestral system, or the reflexive system, seems to do its thing rapidly and automatically, with or without our conscious awareness. The latter stream I will call the deliberative system, because that's what it does: it deliberates, it considers, it chews over the facts — and tries (sometimes successfully, sometimes not) to reason with them.
The reflexive system is clearly older, found in some form in virtually every multicellular organism. It underlies many of our everyday actions, such as the automatic adjustment of our gait as we walk across uneven ground or our rapid recognition of an old friend. The deliberative system, which consciously considers the logic of our goals and choices, is far newer, found in only a handful of species, perhaps only humans.
As best we can tell, the two systems rely on fairly different neural substrates. Some of the reflexive system depends on evolutionarily old brain systems like the cerebellum and basal ganglia (implicated in motor control) and the amygdala (implicated in emotion). The deliberative system, meanwhile, seems to be based primarily in the forebrain, in the prefrontal cortex, which is present — but vastly smaller — in other mammals.
I describe the latter system as 'deliberative' rather than, say, rational because there is no guarantee that the deliberative system will deliberate in genuinely rational ways. Although this system can, in principle, be quite clever, it often settles for reasoning that is less than ideal. In this respect, one might think of the deliberative system as a bit like the Supreme Court: its decisions may not always seem sensible, but there's always at least an intention to be judicious.
Conversely, the reflexive system shouldn't be presumed irrational; it is certainly more shortsighted than the deliberative system, but it likely wouldn't exist at all if it were completely irrational. Most of the time, it does what it does well, even if (by definition) its decisions are not the product of careful thought. Similarly, although it might seem tempting, I would also caution against equating the reflexive system with emotions. Although many emotions (such as fear) are arguably reflexive, others, like schadenfreude — the delight one can take in a rival's pain — are not. Moreover, a great deal of the reflexive system has little if anything to do with emotion; when we instinctively grab a railing as we stumble on a staircase, our reflexive system is clearly what kicks in to save us — but it may do so entirely without emotion. The reflexive system (really, perhaps a set of systems) is about making snap judgments based on experience, emotional or otherwise, rather than about feelings per se.
Even though the deliberative system is more sophisticated, the latest in evolutionary technology, it has never really gained proper control because it bases its decisions on almost invariably secondhand information, courtesy of