more fair trade. In fact, there are loads of ways to allocate effort between defense spending and trade policy to make whichever coalition forms better off.7

What, then, is the national interest? We might have to conclude that except under the direst circumstances there is no such thing as “the national interest,” even if the term refers to what a large majority favors. That is surprising, perhaps, but it follows logically from the idea that people will align themselves behind policies that are closer to what they want against policies that are farther from what they advocate. It just happens that any time there are trade-offs between alternative ways to spend money or to exert influence, there are likely to be many different spending or influence combinations that beat the prevailing view. None can be said to be a truer reflection of the national interest than another; that reflection is in the eyes of the beholder, not in some objective assessment of national well-being. So much for the venerable notion that our leaders pursue the national interest, or, for that matter, that business executives single-mindedly foster shareholder value. I suppose, freed as they are to build a coalition that wants whatever it is they also want, that our leaders really are free to pursue their own interests and to call that the national interest or the corporate interest.

WHAT IS THE OTHER GUY’S BEHAVIOR?

(DOES HE HAVE GOOD CARDS OR NOT?)

However interests frame the questions at stake, game theory still requires that people behave in a logically consistent way in pursuit of those interests. That does not mean people cannot behave in surprising ways, for surely they can. If you’ve ever played the game Mastermind, you’ve confronted the difficulties of logic directly. In Mastermind—a game I’ve used with students to teach them about really probing their beliefs—one player sets up four (or, in harder versions, more) colored pegs selected from among six colors in whatever order he or she chooses. The rest of the players cannot see the pegs. They propose color sequences of pegs and are told that yes, they got three colors right, or no, they didn’t get any right, or yes, they got one color in the right position but none of the others. In this way, information accumulates from round to round. By keeping careful track of what is true and what is false, you gradually eliminate hypotheses and converge on a correct view of the order the colored pegs are in. That is the point behind games like Mastermind, Battleship, or charades. It is also one point behind the forecasting games I designed and use to predict and engineer events.
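
To make the elimination idea concrete, here is a minimal sketch in Python of how a guesser’s beliefs shrink with each round of honest feedback. The game itself involves no code, of course; the four-peg, six-color setup matches the standard game, but the function names, the sample secret, and the sample guesses are purely illustrative assumptions.

```python
from itertools import product
from collections import Counter

COLORS = "RGBYWO"  # six colors, as in the standard game

def score(secret, guess):
    """Mastermind-style feedback: (right color in the right position,
    right color in the wrong position)."""
    exact = sum(s == g for s, g in zip(secret, guess))
    # Total color overlap regardless of position, then subtract exact matches.
    overlap = sum((Counter(secret) & Counter(guess)).values())
    return exact, overlap - exact

def update(hypotheses, guess, feedback):
    """Keep only the codes still consistent with the observed feedback."""
    return {h for h in hypotheses if score(h, guess) == feedback}

# Start by believing every four-peg code is possible: 6**4 = 1,296 hypotheses.
hypotheses = set(product(COLORS, repeat=4))

secret = ("R", "G", "B", "Y")  # hidden from the guessers; an arbitrary example
for guess in [("R", "R", "G", "G"), ("R", "G", "W", "O")]:
    feedback = score(secret, guess)
    hypotheses = update(hypotheses, guess, feedback)
    print(guess, feedback, len(hypotheses), "hypotheses left")
```

Each round of honest feedback throws away every hypothesis it contradicts; that bookkeeping is exactly the skill the game rewards.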

The key to any of these games is sorting out the difference between knowledge and beliefs. Different players in any game are likely to start out with different beliefs because they don’t have enough information to know the true lay of the land. It is fine to sustain beliefs that could be consistent with what’s observed, but it’s not sensible to hold on to beliefs after they have been refuted by what is happening around us. Of course, sorting out when beliefs and actions are inconsistent requires working out the incentives people have to lie, mislead, bluff, and cheat.

In Mastermind this is easy to do because the game has rules that stipulate the order of guessing and that require the person who placed the pegs to respond honestly to proposed color sequences suggested by other players. There is no point to the game if the person placing the pegs lies to everyone else. But even when everyone tells the truth, it is easy to slip into serious lapses in logic that can lead to entirely wrong beliefs. That is something to be careful about.

Slipping into wrong beliefs is a problem for many of us. It is easy to look at facts selectively and to reach wrong conclusions. That is a major problem, for instance, with the alleged police practice of profiling, or with the judgments some people make about the guilt or innocence of others based on thin evidence that is wrongly assessed. There are very good reasons why the police and we ordinary folk ought not to be too hasty in jumping to conclusions.

Let me give an example to help flesh out how easily we can slip into poor logical thinking. Baseball is beset by a scandal over performance-enhancing drugs. Suppose you know that the odds someone will test positive for steroids are 90 percent if they actually used steroids. Does that mean when someone tests positive we can be very confident that they used steroids? Journalists seem to think so. Congress seems to think so. But it just isn’t so. To formulate a policy we need an answer to the question, How likely is it that someone used steroids if they test positive? It is not enough to know how likely they are to test positive if they use steroids. Unfortunately, we cannot easily know the answer to the question we really care about. We can know whether someone tested positive, but that could be a terrible basis for deciding whether the person cheated. A logically consistent use of probabilities—working out the real risks—can help make that clear.

Imagine that out of every 100 baseball players (only) 10 cheat by taking steroids (game theory notwithstanding, I am an optimist) and that the tests are accurate enough that 9 out of every 10 cheaters test positive. To evaluate the likelihood of guilt or innocence we still need to know how many honest players test positive—that is, how often the tests give us a false positive answer. Tests are, after all, far from perfect. Just imagine that while 90 out of every 100 players do not cheat, 10 percent of the honest players nevertheless test (falsely) positive. Looking at these numbers it’s easy to think, well, hardly anyone gets a false positive (only 10 percent of the innocent) and almost every guilty party gets a true positive (90 percent of the guilty), so knowing whether a person tested positive must make us very confident of their guilt. Wrong!8

With the numbers just laid out, 9 out of 10 cheaters test positive and 9 out of 90 innocent ball players also test positive. So 9 of the 18 positive test results come from cheaters and 9 come from absolutely innocent baseball players. In this example, the odds that a player testing positive actually uses steroids are fifty-fifty, just the flip of a coin. That is hardly enough to ruin a person’s career and reputation. Who would want to convict so many innocents just to get the guilty few? It is best to take seriously the dictum “innocent until proven guilty.”
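
The same fifty-fifty answer falls out of a few lines of counting. A quick sketch, using only the made-up numbers above, not real testing data:

```python
# Hypothetical numbers from the example above, not real testing data.
players = 100
cheaters = 10                     # 10 of every 100 players cheat
honest = players - cheaters       # the other 90 are honest

true_positives = 0.9 * cheaters   # 9 cheaters test positive
false_positives = 0.1 * honest    # 9 honest players also test positive

positives = true_positives + false_positives  # 18 positive tests in all
print(true_positives / positives)             # 0.5 -- the flip of a coin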

The calculation we just did is an example of Bayes’ Theorem.9 It provides a logically sound way to avoid inconsistencies between what we thought was true (a positive test means a player uses steroids) and new information that comes our way (half of all players testing positive do not use steroids). Bayes’ Theorem compels us to ask probing questions about what we observe. Instead of asking, “What are the odds that a baseball player uses performance-enhancing drugs?” we ask, “What are the odds that a baseball player uses performance-enhancing drugs given that we know he tested positive for such drugs and we know the odds of testing positive under different conditions?”
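
Written out in symbols, with the example’s made-up numbers plugged in, that probing question looks like this:

\[
P(\text{cheat} \mid \text{positive})
= \frac{P(\text{positive} \mid \text{cheat})\,P(\text{cheat})}
       {P(\text{positive} \mid \text{cheat})\,P(\text{cheat})
        + P(\text{positive} \mid \text{honest})\,P(\text{honest})}
= \frac{0.9 \times 0.1}{0.9 \times 0.1 + 0.1 \times 0.9}
= \frac{1}{2}.
\]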

Bayes’ Theorem provides a way to calculate how people digest new information. It assumes that everyone uses such information to check whether what they believe is consistent with their new knowledge. It highlights how our beliefs change—how they are updated, in game-theory jargon—in response to new information that reinforces or contradicts what we thought was true. In that way, the theorem, and the game theorists who rely on it, view beliefs as malleable rather than as unalterable biases lurking in a person’s head.
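
As a minimal sketch of that updating in Python, again with the example’s hypothetical numbers (the second, independent test is an added assumption, there only to show beliefs continuing to move as evidence accumulates):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Revise a belief after seeing one new piece of evidence."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Start from the 10 percent prior and digest one positive test,
# then a second, independent one (a hypothetical added here).
belief = 0.10
for _ in range(2):
    belief = bayes_update(belief, 0.90, 0.10)
    print(round(belief, 2))  # 0.5 after one test, 0.9 after two
```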

This idea of updating beliefs leads us to the next challenge. Suppose a baseball player who had a positive (guilty) test result is called to testify before Congress in the steroid scandal. Now suppose he knows the odds sketched above. Aware of these statistics, and knowing that any self-respecting congressperson is also aware of them, the baseball player knows that Congress, if citing only a positive test result as their evidence, in fact has little on him, no matter how much outrage they muster. The player, in other words, knows Congress is bluffing. But of course Congress knows this as well, so they have subpoenaed the player’s trainer, who is coming in to testify right after the player. Is this just another bluff by Congress, tightening the screws to elicit a confession with the threat of perjury looming? Whether the player is guilty or not, perhaps he shrugs off the move, in effect calling Congress’s raising of the stakes. Now what? Does Congress actually have anything, or will they be embarrassed for going on a fishing expedition and dragging an apparently innocent man through the mud? Will the player adamantly profess innocence knowing he’s guilty (but maybe he really isn’t), and should we shrug off declarations of innocence, as it seems so many of us do? Is Congress bluffing? Is the player bluffing? Is everyone bluffing? These are tough problems, and they are right up game theory’s alley!
