In real life there are plenty of incentives for others (and for us) to lie. That is certainly true for athletes, corporate executives, national leaders, poker players, and all the rest of us. Therefore, to predict the future we have to reflect on when people are likely to lie and when they are most likely to tell the truth. In engineering the future, our task is to find the right incentives so that people tell the truth, or so that, when it helps our cause, they believe our lies.
One way of eliciting honest responses is to make repeated lying really costly. Bluffing at poker, for instance, can be costly exactly because other players sometimes don’t believe big bets, and don’t fold as a result. If their hand is better, the bluff comes straight out of the liar’s pocket. So the central feature of a game like five-card draw is not figuring out the probability of drawing an inside straight or three of a kind, although that’s certainly useful too. It’s about convincing others that your hand is stronger than it really is. Part of the key to accumulating bargaining chips, whether in poker or diplomacy, is engineering the future by exploiting leverage that really does not exist. Along with taking prudent risks, creating leverage is one of the most important features in changing outcomes. Of course, that is just a polite way of saying that it’s good to know when and how to lie.
Betting, whether with chips, stockholders’ money, perjury charges, or soldiers, can lead others to wrong inferences that benefit the bettor; but gambling always suffers from two limitations. First, it can be expensive to bet more than a hand is worth. Second, everyone has an interest in trying to figure out who is bluffing and who is being honest. Raising the stakes helps flush out the bluffers. The bigger the cumulative bet, the costlier it is to pretend to have the resolve to see a dispute through when the cards really are lousy. How much pain anyone is willing to risk on a bluff, and how similar their wagering is when they are bluffing and when they are really holding good cards, are crucial to the prospects of winning or of being unmasked. That, of course, is why diplomats, lawyers, and poker players need a good poker face, and it is why, for example, you take your broker’s advice more seriously if she invests a lot of her own money in a stock she’s recommending.
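To see the cost-benefit logic of a bluff in numbers, here is a minimal sketch in Python. The function name and the payoffs are my own illustrative assumptions, not anything from the text: the bluff wins the pot when the other player folds and loses the extra bet when the bluff is called.

```python
def bluff_expected_value(pot, bluff_bet, p_fold):
    """Expected value of bluffing with a losing hand (toy model).

    pot:       chips already on the table, won if the opponent folds
    bluff_bet: extra chips wagered to run the bluff
    p_fold:    probability the opponent believes the bluff and folds
    """
    # If the opponent folds we collect the pot; if they call, the bluff
    # comes straight out of our pocket.
    return p_fold * pot - (1 - p_fold) * bluff_bet


# A 100-chip pot and a 50-chip bluff pay off only if the opponent folds
# often enough.
for p_fold in (0.2, 0.4, 0.6):
    print(p_fold, bluff_expected_value(pot=100, bluff_bet=50, p_fold=p_fold))
```

The toy calculation makes one point only: the more often others call, the more the bluff comes straight out of the liar’s pocket, which is exactly the incentive that keeps repeated lying costly.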
Getting the best results comes down to matching actions to beliefs. Gradually, under the right circumstances, exploiting information leads to consistency between what people see, what they think, and what they do, just as it does in Mastermind. Convergence in thinking facilitates deals, bargains, and the resolution of disputes.
With that, we’ve just completed the introductory course in game theory. Nicely done! Now we’re ready to go on to the more advanced course. In the next chapter we look in more depth at how the very fact of our being strategic changes everything going on around us. That will set the stage for working out how we can use strategy to change things to be better for ourselves and those we care about and, if we are altruistic enough, maybe even for almost everyone.
3
GAME THEORY 102
GAME THEORY 101 started us off thinking about how different people are from particles. In short, we are strategists. We calculate before we interact. And with 101 under our belts, we know enough to delve more deeply into the subtleties of strategizing.
Of the many lessons game theory teaches us, one of particular import is that the future—or at least its anticipation—can cause the past, perhaps even more often than the past causes the future. Sound crazy? Ask yourself, do Christmas tree sales cause Christmas? This sort of reverse causality is fundamental to how game theorists work through problems to anticipate outcomes. It is very different from conventional linear thinking. Let me offer an example where failing to recognize how the future shapes the past can lead to really bad consequences.
Many believe that arms races cause war.1 With that conviction in mind, policy makers vigorously pursue arms control agreements to improve the prospects of peace. To be sure, controlling arms means that if there is war, fewer people are killed and less property is destroyed. That is certainly a good thing, but that is not why people advocate arms control. They want to make war less likely. But reducing the amount or lethality of arms just does not do that.
The standard account of how arms races cause war involves what game theorists call a hand wave—that is, at some point the analyst waves his hands in the air instead of providing the logical connection from argument to conclusions. The arms-race hand wave goes like this:
When a country builds up its arms it makes its adversaries fear that their security is at risk. In response, they build up their own arms to defend themselves. The other side looks at that buildup—seeing their own as purely defensive—and tries to protect itself by developing still more and better weapons. Eventually the arms race produces a massive overcapacity to kill and destroy. Remember how many times over the U.S. and Soviet nuclear arsenals could destroy the world! So, as the level of arms—ten thousand nuclear-tipped missiles, for instance—grows out of proportion to the threat, things spiral out of control (that’s the hand wave—exactly how and why they spiral is never spelled out), and war follows.
Wait a moment, let’s slow down and think about that. The argument boils down to claiming that when the costs of war get to be really big—arms are out of proportion to the threat—war becomes more likely. That’s really odd. Common sense and basic economics teach us that when the cost of anything goes up, we generally buy less, not more. Why should that be any less true of war?
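The price logic can be written down as a bare-bones expected-utility comparison. The sketch below is my own illustration with made-up numbers, not the author’s model: a country fights only when the expected prize from war, net of the cost of fighting, beats the value of a negotiated deal.

```python
def prefers_war(p_win, prize, war_cost, deal_value):
    """Toy expected-utility check: is fighting better than settling?

    p_win:      chance of winning the war
    prize:      value of what victory delivers
    war_cost:   cost paid for fighting, win or lose
    deal_value: what a negotiated compromise is worth
    """
    return p_win * prize - war_cost > deal_value


# Holding the prize, the odds, and the deal fixed, raising the cost of
# war only shrinks the set of cases in which fighting looks attractive.
for war_cost in (10, 40, 70):
    print(war_cost, prefers_war(p_win=0.5, prize=100, war_cost=war_cost, deal_value=20))
```

Nothing in this sketch is specific to missiles or armies; it simply restates the claim from basic economics: when the price of war goes up, the demand for war goes down.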
True, just about every war has been preceded by a buildup in weapons, but that is not the relevant observation. It is akin to looking at a baseball player’s positive test for steroids as proof that he cheats. What we want to know is how often the acquisition of lots more weapons leads to war, not how often wars are preceded by the purchase of arms. The answer to the question we care about is, not very often.
By looking at wars and then asking whether there had been an arms race, we confuse cause and effect. We ignore all the instances in which arms may successfully deter fighting exactly because the anticipated destruction is so high. Big wars are very rare precisely because when we expect high costs we look for ways to compromise. That, for instance, is why the 1962 Cuban Missile Crisis ended peacefully. That is why every major crisis between the United States and the Soviet Union throughout the cold war ended without the initiation of a hot war. The fear of nuclear annihilation kept it cold. That is why lots of events that could have ignited world wars ended peacefully and are now all but forgotten.
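A tiny simulation shows how that selection effect can fool us. The numbers below are invented purely for illustration: in this toy world an arms buildup sharply cuts the chance that a dispute turns into war, yet because buildups are so common, most of the wars we end up observing were still preceded by one.

```python
import random

random.seed(1)

# Invented numbers for illustration only. In this toy world buildups
# genuinely DETER war: a dispute with a buildup turns into war 10% of
# the time, one without a buildup 60% of the time. Buildups occur in
# 95% of disputes.
N = 100_000
wars_with_buildup = 0
wars_without_buildup = 0
n_buildups = 0

for _ in range(N):
    buildup = random.random() < 0.95
    p_war = 0.10 if buildup else 0.60
    if random.random() < p_war:
        if buildup:
            wars_with_buildup += 1
        else:
            wars_without_buildup += 1
    n_buildups += buildup

wars = wars_with_buildup + wars_without_buildup
# Looking backward from the wars that happened, buildups look guilty...
print("share of wars preceded by a buildup:", wars_with_buildup / wars)
# ...but looking forward from disputes, buildups lower the risk of war.
print("P(war | buildup):   ", wars_with_buildup / n_buildups)
print("P(war | no buildup):", wars_without_buildup / (N - n_buildups))
```

Most of the simulated wars follow a buildup even though any given buildup makes war far less likely. That is the trap of sampling on the wars instead of on the disputes.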
So, in war and especially in peace, reverse causality is at work. When policy makers turn to arms control deals, thinking they are promoting peace, they are taking much bigger risks than they seem to realize. Failing to think about reverse causation leads to poor predictions of what is likely to happen, and that can lead to dangerous decisions and even to catastrophic war.
We will see many more instances of this kind of reasoning in later chapters. We will examine, for example, why most corporate fraud probably is not sparked by executive greed and why treaties to control greenhouse gas emissions may not be the best way to fight global warming. Each example reinforces the idea that correlation is not causation. Each also reminds us that the logic of reverse causation—called endogeneity in game theory—means that what we actually “observe”—such as arms races followed by war—is often a biased sample.