meal to bring it closer to its mouth; by the time we are adults, our reaching system is so well tuned that we never even think about it. For instance, in a strict technical sense, every time I reach for my cup of tea, I make a set of choices. I decide that I want the tea, that the potential pleasure and the hydration offered by the beverage outweigh the risk of spillage. More than that, and even less consciously, I compute the trajectory my arm must follow and the force with which my fingers should close around the handle, all without a moment's deliberation.
But economics is not supposed to be a theory of how people reach for coffee mugs; it's supposed to be a theory of how they spend their money, allocate their time, plan for their retirement, and so forth — it's supposed to be, at least in part, a theory about how people make conscious decisions.
And often, the closer we get to conscious decision making, a more recent product of evolution, the worse our decisions become. When the NYU professors reworked their grasping task to make it a more explicit word problem, most subjects' performance fell to pieces. Our more recently evolved deliberative system is, in this particular respect, no match for our ancient system for muscle control. Outside that rarefied domain, there are loads of circumstances in which human performance predictably defies any reasonable notion of rationality.
Suppose, for example, that I give you a choice between participating in two lotteries. In one lottery, you have an 89 percent chance of winning $1 million, a 10 percent chance of winning $5 million, and a 1 percent chance of winning nothing; in the other, you have a 100 percent chance of winning $1 million. Which do you go for? Almost everyone takes the sure thing.
Now suppose instead your choice is slightly more complicated. You can take either an 11 percent chance at $1 million or a 10 percent chance of winning $5 million. Which do you choose? Here, almost everyone goes for the second choice, a 10 percent shot at $5 million.
What would be the rational thing to do? According to the theory of rational choice, you should pick, in each case, the option with the higher expected payoff. By that standard the second choice people make is fine, but the first is not: the first lottery's gamble is worth far more, on average, than the certain million, yet almost everyone turns it down — leaving close to half a million dollars on the table. Pure insanity from the perspective of 'rational choice.'
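For readers who want the arithmetic spelled out, here is a minimal sketch; the probabilities and payoffs are exactly those of the lotteries above, and nothing else is assumed:

```python
# Expected value of each option in the first lottery described above.
gamble = 0.89 * 1_000_000 + 0.10 * 5_000_000 + 0.01 * 0  # $1,390,000
sure_thing = 1.00 * 1_000_000                            # $1,000,000

# Taking the sure thing forgoes roughly $390,000 in expected winnings.
print(f"${gamble - sure_thing:,.0f} left on the table")
```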
Another experiment offered undergraduates a choice between two raffle tickets, one with 1 chance in 100 to win a $500 voucher toward a trip to Paris, the other with 1 chance in 100 to win a $500 voucher toward college tuition. Most people, in this case, prefer Paris. No big problem there; if Paris is more appealing than the bursar's office, so be it. But when the odds increase from 1 in 100 to 99 out of 100, most people's preferences reverse, and the tuition voucher wins out. That too defies rationality: if Paris beats tuition at 1 chance in 100, it should beat tuition at 99 out of 100, since the odds apply equally to both prizes.
To take an entirely different sort of illustration, consider the simple question I posed in the opening chapter: would you drive across town to save $25 on a $100 microwave? Most people would say yes, but hardly anybody would drive across town to save the same $25 on a $1,000 television. From the perspective of an economist, this sort of thinking too is irrational. Whether the drive is worth it should depend on just two things: the value of your time and the cost of the gas, nothing else. Either the value of your time and gas is less than $25, in which case you should make the drive, or your time and gas are worth more than $25, in which case you shouldn't make the drive — end of story. Since the labor to drive across town is the same in both cases and the monetary amount is the same, there's no rational reason why the drive would make sense in one instance and not the other.
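To make the economist's rule concrete, here is a minimal sketch; the $10 figure for time and gas is an assumption for illustration, nothing in the text specifies it:

```python
def should_drive(saving: float, trip_cost: float) -> bool:
    """The economist's rule: make the trip only if the absolute
    saving exceeds what the time and gas are worth."""
    return saving > trip_cost

trip_cost = 10.0  # assumed value of time and gas, for illustration only
for item_price in (100, 1_000):
    # item_price is deliberately unused: the rule ignores it entirely.
    print(item_price, should_drive(saving=25, trip_cost=trip_cost))
```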
On the other hand, to anyone who thinks about money in relative terms, the pattern makes intuitive sense: saving 25 percent on a microwave feels like a real victory, while saving 2.5 percent on a television hardly seems worth the trouble.
What leads us to think about money in (less rational) relative terms rather than (more rational) absolute terms?
To start with, humans didn't evolve to think about numbers, much less money, at all. Neither money nor numerical systems are omnipresent. Some cultures trade only by means of barter, and some have simple counting systems with only a few numerical terms, such as 'one,' 'two,' and 'many.'
In some domains, following Weber's law makes a certain amount of sense: an extra 2 kilos of wheat in the storehouse, relative to a baseline of 100 kilos, isn't going to matter if everything beyond the first few kilos ultimately spoils; what really matters is the difference between starving and not starving. Of course, money doesn't rot (except in times of hyperinflation), but our brain didn't evolve to cope with money; it evolved to cope with food.
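As a rough sketch of that proportional logic (the 2 percent Weber fraction below is an illustrative assumption, not a figure from the text):

```python
def just_noticeable_difference(baseline: float, weber_fraction: float = 0.02) -> float:
    """Weber's law: the smallest change we can detect grows in
    proportion to the baseline quantity."""
    return weber_fraction * baseline

print(just_noticeable_difference(100))     # 2.0 kilos against a 100-kilo store
print(just_noticeable_difference(10_000))  # 200.0 kilos against a 10,000-kilo store
```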
And so even today, there's some remarkable crosstalk between the two. People are less likely to donate money to charities, for example, if they are hungry than if they are full; meanwhile, experimental subjects (excluding those who were dieting) who are put in a state of 'high desire for money' eat more M&Ms during a taste test than do people who are in a state of 'low desire for money.'* To the degree that our understanding of money is kluged onto our understanding of food, the fact that we think about money in relative terms may be little more than one more accident of our cognitive history.
'Christmas Clubs,' accounts into which people put away small amounts of money all year, with the goal of having enough money for Christmas shopping at the end of the year, provide another case in point. Although the goal is admirable, the behavior is (at least from the perspective of classical economics) irrational: Christmas Club accounts generally carry low balances and pay meager rates, so the money earns less interest than it would if pooled with a person's other funds. And in any event, that money, sitting idle, might be better spent paying down high-interest credit card debt. Yet people do this sort of thing all the time, establishing real or imaginary accounts for different purposes, as if the money weren't all theirs.
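For a rough sense of the cost, here is an illustrative comparison; the balance and both interest rates are assumptions, and the year's deposits are treated as a single lump sum for simplicity:

```python
saved = 1_200        # assumed: $100 set aside each month for a year
club_rate = 0.01     # assumed low Christmas Club interest rate
card_rate = 0.20     # assumed credit card APR

interest_earned = saved * club_rate   # roughly $12 gained in the club
interest_avoided = saved * card_rate  # roughly $240 in charges avoided
print(f"${interest_avoided - interest_earned:,.0f} forgone by the club")
```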
Christmas Clubs and the like persist not because they are fiscally rational but because they are an effective defense against our limited self-control: money walled off for Christmas is money we can't impulsively spend along the way.