unnecessary labor, and has the ability to make judgments towards those ends.”

At first glance, this assumption seems awfully reasonable. Who among us isn’t self-interested? And who wouldn’t avoid unnecessary labor, given the chance? (Why clean your apartment unless you know that guests are coming?)

But as the architect Mies van der Rohe famously said, “God is in the details.” We are indeed good at dodging unnecessary labor, but true rationality is an awfully high standard, frequently well beyond our grasp. To be truly rational, we would need, at a minimum, to face each decision with clear eyes, uncontaminated by the lust of the moment, prepared to make every decision with appropriately dispassionate views of the relevant costs and benefits. Alas, as we’ll see in a moment, the weight of the evidence from psychology and neuroscience suggests otherwise. We can be rational on a good day, but much of the time we are not.

Appreciating what we as a species can and can’t do well — when we are likely to make sound decisions and when we are likely to make a hash of them — requires moving past the idealization of economic man and into the more sticky territory of human psychology. To see why some of our choices appear perfectly sensible and others perfectly foolish, we need to understand how our capacity for choice evolved.

I’ll start with good news. On occasion, human choices can be entirely rational. Two professors at NYU, for example, studied what one might think of as the world’s simplest touch-screen video game — and found that, within the parameters of that simple task, people were almost as rational (in the sense of maximizing reward relative to risk) as you could possibly imagine. Two targets appear (standing still) on a screen, one green, one red. In this task, you get points if you touch the green circle; you lose a larger number of points if you touch the red one. The challenge comes when the circles overlap, as they often do, and if you touch the intersection between the circles, you get both the reward and the (larger) penalty, thus accruing a net loss. Because people are encouraged to touch the screen quickly, and because nobody’s hand-eye coordination is perfect, the optimal thing to do is to point somewhere other than the center of the green circle. For example, if the green circle overlaps but is to the right of the red circle, pointing to the center of the green circle is risky business: an effort to point at the exact center of the green circle will sometimes wind up off target, left of center, smack in the point-losing region where the green and red circles overlap. Instead, it makes more sense to point somewhere to the right of the center of the green circle, keeping the probability of hitting the green circle high, while minimizing the probability of hitting the red circle. Somehow people figure all this out, though not necessarily in an explicit or conscious fashion. Even more remarkably, they do so in a manner that is almost perfectly calibrated to the specific accuracy of their own individual system of hand-eye coordination. Adam Smith couldn’t have asked for more.
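
What people’s motor systems are implicitly doing here is an expected-value calculation. The sketch below is a toy one-dimensional version of the task, not the NYU group’s actual model: the circle positions, the point values, and the 0.4 cm Gaussian pointing error are all invented for illustration. It simply searches for the aim point that maximizes the expected score, and the answer lands well to the right of the green circle’s center, just as described above.

```python
import numpy as np
from scipy.stats import norm

# Toy 1-D version of the touch-screen task (illustrative numbers only).
# The green (reward) circle spans [-1, +1] cm; the red (penalty) circle
# overlaps it, covering [-2.4, -0.4] cm. Touching green earns 100 points,
# touching red costs 500, and touching the overlap does both.
SIGMA = 0.4                  # assumed std. dev. of horizontal pointing error, cm
REWARD, PENALTY = 100, -500
GREEN = (-1.0, 1.0)
RED = (-2.4, -0.4)

def expected_score(aim):
    """Expected points for aiming at `aim`, given Gaussian pointing error."""
    p_green = norm.cdf(GREEN[1], aim, SIGMA) - norm.cdf(GREEN[0], aim, SIGMA)
    p_red = norm.cdf(RED[1], aim, SIGMA) - norm.cdf(RED[0], aim, SIGMA)
    return REWARD * p_green + PENALTY * p_red

aims = np.linspace(-1.0, 1.5, 251)
best = aims[np.argmax([expected_score(a) for a in aims])]
print(f"Best aim point: {best:+.2f} cm right of the green circle's center")
```

A shakier hand (a larger SIGMA) pushes the optimal aim point farther from the penalty region, which is just the sort of individual calibration the experimenters observed.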

The bad news is that such exquisite rationality may well be the exception rather than the rule. People are as good as they are at the pointing-at-circles task because it draws on a mental capacity — the ability to reach for things — that is truly ancient. Reaching is close to a reflex, not just for humans, but for every animal that grabs a meal to bring it closer to its mouth; by the time we are adults, our reaching system is so well tuned that we never even think about it. For instance, in a strict technical sense, every time I reach for my cup of tea, I make a set of choices. I decide that I want the tea, that the potential pleasure and the hydration offered by the beverage outweigh the risk of spillage. More than that, and even less consciously, I decide at what angle to send my hand. Should I use my left hand (which is closer) or my right hand (which is better coordinated)? Should I grab the cylindrical central portion of the mug (which holds the contents that I really want) or go instead for the handle, a less direct but easier-to-grasp means to the tea that is inside? My hands and muscles align themselves automatically, my fingers forming a pincer grip, my elbow rotating so that my hand is in perfect position. Reaching, central to life, involves many decisions, and evolution has had a long time to get them just right.

But economics is not supposed to be a theory of how people reach for coffee mugs; it’s supposed to be a theory of how they spend their money, allocate their time, plan for their retirement, and so forth — it’s supposed to be, at least in part, a theory about how people make conscious decisions.

And often, the closer we get to conscious decision making, a more recent product of evolution, the worse our decisions become. When the NYU professors reworked their pointing task to make it a more explicit word problem, most subjects’ performance fell to pieces. Our more recently evolved deliberative system is, in this particular respect, no match for our ancient system for muscle control. Outside that rarefied domain, there are loads of circumstances in which human performance predictably defies any reasonable notion of rationality.

Suppose, for example, that I give you a choice between participating in two lotteries. In one lottery, you have an 89 percent chance of winning $1 million, a 10 percent chance of winning $5 million, and a 1 percent chance of winning nothing; in the other, you have a 100 percent chance of winning $1 million. Which do you go for? Almost everyone takes the sure thing.

Now suppose instead your choice is slightly more complicated. You can take either an 11 percent chance at $1 million or a 10 percent chance of winning $5 million. Which do you choose? Here, almost everyone goes for the second choice, a 10 percent shot at $5 million.

What would be the rational thing to do? According to the theory of rational choice, you should calculate your “expected utility,” or expected gain, essentially averaging the amount you would win across all the possible outcomes, weighted by their probability. An 11 percent chance at $1 million works out to an expected gain of $110,000; 10 percent at $5 million works out to an expected gain of $500,000, clearly the better choice. So far, so good. But when you apply the same logic to the first set of choices, you discover that people behave far less rationally. The expected gain in the lottery that is split 89 percent/10 percent/1 percent is $1,390,000 (89 percent of $1 million plus 10 percent of $5 million plus 1 percent of $0), compared to a mere million for the sure thing. Yet nearly everyone goes for the million bucks — leaving, on average, $390,000 on the table. Pure insanity from the perspective of “rational choice.”
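
For anyone who wants to check the arithmetic, the few lines below just multiply each payoff by its probability and add the results, using the exact numbers from the two lottery problems above.

```python
# Expected value = sum of (probability x payoff) over all outcomes.
def expected_value(outcomes):
    return sum(prob * payoff for prob, payoff in outcomes)

# First problem: the gamble versus the sure thing.
gamble     = [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]
sure_thing = [(1.00, 1_000_000)]

# Second problem: two long shots.
option_a = [(0.11, 1_000_000)]
option_b = [(0.10, 5_000_000)]

print(f"{expected_value(gamble):,.0f}")      # 1,390,000
print(f"{expected_value(sure_thing):,.0f}")  # 1,000,000
print(f"{expected_value(option_a):,.0f}")    # 110,000
print(f"{expected_value(option_b):,.0f}")    # 500,000
```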

Another experiment offered undergraduates a choice between two raffle tickets, one with 1 chance in 100 to win a $500 voucher toward a trip to Paris, the other, 1 chance in 100 to win a $500 voucher toward college tuition. Most people, in this case, prefer Paris. No big problem there; if Paris is more appealing than the bursar’s office, so be it. But when the odds increase from 1 in 100 to 99 out of 100, most people’s preferences reverse; given the near certainty of winning, most students suddenly go for the tuition voucher rather than the trip — sheer lunacy, if they’d really rather go to Paris.

To take an entirely different sort of illustration, consider the simple question I posed in the opening chapter: would you drive across town to save $25 on a $100 microwave? Most people would say yes, but hardly anybody would drive across town to save the same $25 on a $1,000 television. From the perspective of an economist, this sort of thinking too is irrational. Whether the drive is worth it should depend on just two things: the value of your time and the cost of the gas, nothing else. Either the value of your time and gas is less than $25, in which case you should make the drive, or your time and gas are worth more than $25, in which case you shouldn’t make the drive — end of story. Since the labor to drive across town is the same in both cases and the monetary amount is the same, there’s no rational reason why the drive would make sense in one instance and not the other.

On the other hand, to anyone who hasn’t taken a class in economics, saving $25 on $100 seems like a good deal (“I saved 25 percent!”), whereas saving $25 on $1,000 appears to be a stupid waste of time (“You drove all the way across town to get 2.5 percent off? You must have nothing better to do”). In the clear-eyed arithmetic of the economist, a dollar is a dollar is a dollar, but most ordinary people can’t help but think about money in a somewhat less rational way: not in absolute terms, but in relative terms.
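
The contrast fits in a few lines of code. The sketch below is purely illustrative: the $5 cost of the drive and the 10 percent “feels like a real discount” threshold are invented numbers, but the two functions capture the two ways of thinking just described.

```python
# Two decision rules for "is the drive across town worth it?"
def economist_drives(savings, trip_cost):
    # Absolute terms: drive whenever the dollars saved exceed the cost
    # of your time and gas (assumed here to total $5).
    return savings > trip_cost

def intuitive_drives(savings, price, threshold=0.10):
    # Relative terms (a made-up folk rule): drive only if the discount
    # is a big enough fraction of the sticker price to "feel" worthwhile.
    return savings / price > threshold

for price in (100, 1_000):
    print(price,
          economist_drives(25, trip_cost=5),  # True both times
          intuitive_drives(25, price))        # True for $100, False for $1,000
```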

What leads us to think about money in (less rational) relative terms rather than (more rational) absolute terms?

To start with, humans didn’t evolve to think about numbers, much less money, at all. Neither money nor numerical systems are omnipresent. Some cultures trade only by means of barter, and some have simple counting systems with only a few numerical terms, such as one, two, many. Clearly, both counting systems and money are cultural inventions. On the other hand, all vertebrate animals are built with what some psychologists call an “approximate system” for numbers, such that they can distinguish more from less. And that system in turn has the peculiar property of being “nonlinear”: the difference between 1 and 2 subjectively seems greater than the difference between 101 and 102. Much of the brain is built on this principle, known as Weber’s law. Thus, a 150-watt light bulb seems only a bit brighter than a 100-watt bulb, whereas a 100-watt bulb seems much brighter than a 50-watt bulb.
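
One standard way of formalizing Weber’s law is Fechner’s logarithmic idealization: subjective magnitude grows with the logarithm of physical magnitude, so equal ratios feel equally big while equal differences do not. The toy sketch below uses that assumption to reproduce both examples from the paragraph above.

```python
import math

def subjective(intensity):
    # Fechner's idealization of Weber's law: perceived magnitude is
    # (proportional to) the logarithm of physical magnitude.
    return math.log(intensity)

# 1 vs. 2 feels like a big jump; 101 vs. 102 barely registers.
print(subjective(2) - subjective(1))      # ~0.69
print(subjective(102) - subjective(101))  # ~0.01

# The light bulbs: 50 -> 100 watts is a big step, 100 -> 150 only a modest one.
print(subjective(100) - subjective(50))   # ~0.69
print(subjective(150) - subjective(100))  # ~0.41
```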

In some domains, following Weber’s law makes a certain amount of sense: a storehouse of an extra 2 kilos of
