surprise, then, when my computer program, written by me and fed only with my data, predicted an entirely different result. It predicted that Charan Singh would become prime minister and that he would include someone named Y. B. Chavan in his cabinet, and that they would gain support—albeit briefly—from Indira Gandhi, then the recently ousted prime minister. The model also predicted that the new Indian government would be incapable of governing and so would soon fall.

I found myself forced to choose between personal opinion and my commitment to logic and evidence as the basis for coming to conclusions about politics. I believed in the logic behind my model and I believed in the correctness of the data I had jotted down. After staring at the output, working out how my own program came to a conclusion so different from my personal judgment, I chose science over punditry. In fact, I told colleagues at Rochester what the model’s prediction was even before I reported back to the State Department. When I spoke with the official at State he was taken aback. He noted that no one else was suggesting this result and that it seemed strange at best. He asked me how I had come to this judgment. When I told him I’d used a computer program based on a model of decision making that I was designing, he just laughed and urged me not to repeat that to anyone.

A few weeks later, Charan Singh became the prime minister with Y. B. Chavan as his deputy prime minister, with support from Indira Gandhi. And a few months after that, Charan Singh’s government unraveled, Indira Gandhi withdrew her support, and a new election was called, just as the computer-generated forecast had indicated. This got me pretty excited. Here was a case where my personal judgment had been wrong, and yet my knowledge was the only source of information the computer model had. The model came up with the right answer and I didn’t. Clearly there were at least two possibilities: I was just lucky, or I was onto something.

Luck is great, but I’m not a great believer in luck alone as an explanation for results. Sure, rare events happen—rarely. I set out to push my model by testing it against lots of cases, hoping to learn whether it really worked. I applied it to prospective leadership changes in the Soviet Union; to questions of economic reform in Mexico and Brazil; and to budgetary decisions in Italy—that is, to wide-ranging questions about politics and economics. The model worked really well on these cases—so well, in fact, that it attracted the attention of people in the government who heard me present some of the analyses at academic conferences. Eventually this led to a grant from the Defense Advanced Research Projects Agency (DARPA), a research arm of the Department of Defense (and the sponsors of research that fostered the development of the Internet long before Al Gore “invented” it). They gave me seventeen issues to examine, and as it happened, the model—by then somewhat more sophisticated—got all seventeen right. Government analysts who provided the data the model needed—we’ll talk more about that later—didn’t do nearly as well. Confident that I was onto something useful, I started a small consulting company with a couple of colleagues who had their own ideas about how to predict big political events. Now, many years later, I operate a small consulting firm with my partner and former client, Harry Roundell. Harry, formerly a managing director at J. P. Morgan, and I apply a much more sophisticated version of my 1979 model to interesting business and government problems. We’ll see lots of examples in the pages to come.

It’s easy to see if predictions are right or wrong when they are precise, and almost impossible to judge them when they are cloaked in hazy language. In my experience, government and private businesses want firm answers. They get plenty of wishy-washy predictions from their staff. They’re looking for more than “On the one hand this, but on the other hand that”—and I give it to them. Sometimes that leads to embarrassment, but that’s the point. If people are to pay attention to predictions, they need real evidence as to the odds that the predictions are right. Being reluctant to put predictions out in public is the first sign that the prognosticator doesn’t have confidence in what he’s doing.

According to a declassified CIA assessment, the predictions for which I’ve been responsible have a 90 percent accuracy rate.6 This is not a reflection of any great wisdom or insight on my part—I have little enough of both, and believe me, there are plenty of ivy-garlanded professors and NewsHour intellectuals who would agree. What I do have is the lesson I learned in my “Aha!” moment: Politics is predictable. All that is needed is a tool—like my model—that takes basic information, evaluates it by assuming everyone does what they think is best for them, and produces reliable assessments of what they will do and why they will do it. Successful prediction does not rely on any special personal qualities. You don’t need to walk around conjuring the future, plucking predictions out of thin air. There’s no need for sheep entrails, tea leaves, or special powers. The key to good prediction is getting the logic right, or “righter” than any way that is achieved by other means of prediction.
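To give a flavor of what "basic information" means here, the toy sketch below assumes each player can be summarized by a position on an issue, a level of influence, and how much they care (salience), and takes a simple weighted average as a first-cut forecast. This is only an illustration of that style of input, not the author's actual, far more sophisticated game-theoretic model; the actors and numbers are invented.

```python
# Toy first-cut sketch: actors described by position, influence, and salience.
# The weighted-average "forecast" is an illustration of the flavor of such
# inputs, NOT the model described in this book.
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    position: float   # where the actor stands on a 0-100 issue scale
    influence: float  # relative power or capability
    salience: float   # how much the actor cares about this issue (0-1)

def first_cut_forecast(actors: list[Actor]) -> float:
    """Return the influence-and-salience-weighted mean position."""
    weight = lambda a: a.influence * a.salience
    total = sum(weight(a) for a in actors)
    return sum(weight(a) * a.position for a in actors) / total

# Hypothetical actors, purely for illustration.
actors = [
    Actor("Faction A", position=80, influence=0.6, salience=0.9),
    Actor("Faction B", position=30, influence=0.3, salience=1.0),
    Actor("Faction C", position=50, influence=0.1, salience=0.4),
]
print(f"First-cut forecast position: {first_cut_forecast(actors):.1f}")
```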

Accurate prediction relies on science, not artistry—and certainly not sleight of hand. It is a reflection of the power of logic and evidence, and testimony to the progress being made in demystifying the world of human thought and decision. There are lots of powerful tools for making predictions. Applied game theory, my chosen method, is right for some problems but not all. Statistical forecasting is a terrific way to address questions that don’t involve big breaks from past patterns. Election prognosticators, whether at universities, polling services, or blogs on the Web (like that of Nate Silver, the son of an old family friend), all estimate the influence of variables on past outcomes and project the weight of that influence onto current circumstances. Online election markets work well too. They work just the way jelly bean contests work. Ask lots of people how many jelly beans there are in a jar, and just about no one will be close to being right, but the average of their predictions is often very close to the true number. These methods have terrific records of accuracy when applied to appropriate problems.
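To make the jelly-bean point concrete, here is a minimal simulation of that averaging effect; the bean count, number of guessers, and error sizes are invented purely for illustration.

```python
# Minimal simulation of the jelly-bean effect: individual guesses are noisy
# and often far off, but their average tends to land close to the truth.
import random

random.seed(42)
true_count = 1_000   # actual number of jelly beans in the jar (made up)
n_guessers = 500     # number of people guessing (made up)

# Each guess is the truth distorted by a large, roughly symmetric error.
guesses = [true_count * random.uniform(0.4, 1.6) for _ in range(n_guessers)]

avg_guess = sum(guesses) / len(guesses)
typical_error = sum(abs(g - true_count) for g in guesses) / len(guesses)

print(f"Average individual error: {typical_error:.0f} beans")
print(f"Error of the averaged guess: {abs(avg_guess - true_count):.0f} beans")
```

Run it and the averaged guess typically misses by only a handful of beans, while the average individual is off by a few hundred.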

Statistical methods are certainly not limited to just studying and predicting elections. They help us understand harder questions too, such as what leads to international crises or what influences international commerce and investments. Behavioral economics is another prominent tool grounded in the scientific method, deriving insights from sophisticated statistical and experimental tests. Steven Levitt, one of the authors of Freakonomics, has introduced millions of readers to behavioral economics, giving them insights into important and captivatingly interesting phenomena.

Game-theory models, with their focus on strategic behavior, are best for predicting the business and national-security issues I get asked about. I say this having done loads of statistical studies on questions of war and peace, nation building, and much more, as well as historical and contemporary case studies. Not every method is right for every problem, but for predicting the future the way I do, game theory is the way to go, and I’ll try to convince you of that not only by highlighting the track record of my method, but also by daring to be embarrassed later in this book when I make predictions about big future events.

Prediction with game theory requires learning how to think strategically about other people’s problems the way you think about your own, and it means empathizing with how others think about the same problems. A fast laptop and the right software help, but any problem whose outcome depends on many people and involves real or imagined negotiations is susceptible to accurate forecasting drawn from basic methods.

In fact, not only can we learn to look ahead at what is likely to happen, but—and this is far more useful than mere prediction and the visions of seers past and present—we can learn to engineer the future to produce happier outcomes. Sadly, our government, business, and civic leaders rarely take advantage of this possibility. Instead, they rely on wishful thinking and yearn for “wisdom” instead of seeking help from cutting-edge science. In place of analytic tools they count on the ever-present seat of their pants.

We live in a world in which billions—even trillions—of dollars are spent on preparations for war. Yet we spend hardly a penny on improving decision making to determine when or whether our weapons should be used, let alone how we can negotiate successfully. The result: we get bogged down in far-off places with little understanding of why we are there or how to advance our goals, and even less foresight into the roadblocks that will lie in our way. That is no way to run a twenty-first-century government when science can help us do so much better.
