“In the future, listen to the news and remember these words: ‘Up above,’” al Hilal replies. Es Sayed thinks that al Hilal is referring to an operation in his native Yemen, but al Hilal corrects him: “But the surprise attack will come from the other country, one of those attacks you will never forget.”

A moment later al Hilal says about the plan, “It is something terrifying that goes from south to north, east to west. The person who devised this plan is a madman, but a genius. He will leave them frozen [in shock].”

This is a tantalizing exchange. It would now seem that it refers to September 11. But in what sense was it a “forecast”? It gave neither time nor place nor method nor target. It suggested only that there were terrorists out there who liked to talk about doing something dramatic with an airplane—which did not, it must be remembered, reliably distinguish them from any other terrorists of the past thirty years.

In the real world, intelligence is invariably ambiguous. Information about enemy intentions tends to be short on detail. And information that’s rich in detail tends to be short on intentions. In April of 1941, for instance, the Allies learned that Germany had moved a huge army up to the Russian front. The intelligence was beyond dispute: the troops could be seen and counted. But what did it mean? Churchill concluded that Hitler wanted to attack Russia. Stalin concluded that Hitler was serious about attacking, but only if the Soviet Union didn’t meet the terms of the German ultimatum. The British foreign secretary, Anthony Eden, thought that Hitler was bluffing, in the hope of winning further Russian concessions. British intelligence thought—at least, in the beginning—that Hitler simply wanted to reinforce his eastern frontier against a possible Soviet attack. The only way for this piece of intelligence to have been definitive would have been if the Allies had had a second piece of intelligence—like the phone call between al Hilal and Es Sayed—that demonstrated Germany’s true purpose. Similarly, the only way the al Hilal phone call would have been definitive is if we’d also had intelligence as detailed as the Allied knowledge of German troop movements. But rarely do intelligence services have the luxury of both kinds of information. Nor are their analysts mind readers. It is only with hindsight that human beings acquire that skill.

The Cell tells us that, in the final months before September 11, Washington was frantic with worry:

A spike in phone traffic among suspected Al Qaeda members in the early part of the summer [of 2001], as well as debriefings of [an Al Qaeda operative in custody] who had begun cooperating with the government, convinced investigators that bin Laden was planning a significant operation—one intercepted Al Qaeda message spoke of a “Hiroshima-type” event—and that he was planning it soon. Through the summer, the CIA repeatedly warned the White House that attacks were imminent.

The fact that these worries did not protect us is not evidence of the limitations of the intelligence community. It is evidence of the limitations of intelligence.

4.

In the early 1970s, a professor of psychology at Stanford University named David L. Rosenhan gathered together a painter, a graduate student, a pediatrician, a psychiatrist, a housewife, and three psychologists. He told them to check into different psychiatric hospitals under aliases, with the complaint that they had been hearing voices. They were instructed to say that the voices were unfamiliar, and that they heard words like empty, thud, and hollow. Apart from that initial story, the pseudo patients were instructed to answer every question truthfully, to behave as they normally would, and to tell the hospital staff—at every opportunity—that the voices were gone and that they had experienced no further symptoms. The eight subjects were hospitalized, on average, for nineteen days. One was kept for almost two months. Rosenhan wanted to find out if the hospital staffs would ever see through the ruse. They never did.

Rosenhan’s test is, in a way, a classic intelligence problem. Here was a signal (a sane person) buried in a mountain of conflicting and confusing noise (a mental hospital), and the intelligence analysts (the doctors) were asked to connect the dots—and they failed spectacularly. In the course of their hospital stay, the eight pseudo patients were given a total of twenty-one hundred pills. They underwent psychiatric interviews, and sober case summaries documenting their pathologies were written up. They were asked by Rosenhan to take notes documenting how they were treated, and this quickly became part of their supposed pathology. “Patient engaging in writing behavior,” one nurse ominously wrote in her notes. Having been labeled as ill upon admission, they could not shake the diagnosis. “Nervous?” a friendly nurse asked one of the subjects as he paced the halls one day. “No,” he corrected her, to no avail, “bored.”

The solution to this problem seems obvious enough. Doctors and nurses need to be made alert to the possibility that sane people sometimes get admitted to mental hospitals. So Rosenhan went to a research-and-teaching hospital and informed the staff that at some point in the next three months, he would once again send over one or more of his pseudo patients. This time, of the 193 patients admitted in the three-month period, 41 were identified by at least one staff member as being almost certainly sane. Once again, however, they were wrong. Rosenhan hadn’t sent anyone over. In attempting to solve one kind of intelligence problem (overdiagnosis), the hospital simply created another problem (underdiagnosis). This is the second, and perhaps more serious, consequence of creeping determinism: in our zeal to correct what we believe to be the problems of the past, we end up creating new problems for the future.

Pearl Harbor, for example, was widely considered to be an organizational failure. The United States had all the evidence it needed to predict the Japanese attack, but the signals were scattered throughout the various intelligence services. The army and the navy didn’t talk to each other. They spent all their time arguing and competing. This was, in part, why the Central Intelligence Agency was created, in 1947—to ensure that all intelligence would be collected and processed in one place. Twenty years after Pearl Harbor, the United States suffered another catastrophic intelligence failure, at the Bay of Pigs: the Kennedy administration grossly underestimated the Cubans’ capacity to fight and their support for Fidel Castro. This time, however, the diagnosis was completely different. As Irving L. Janis concluded in his famous study of “groupthink,” the root cause of the Bay of Pigs fiasco was that the operation was conceived by a small, highly cohesive group whose close ties inhibited the beneficial effects of argument and competition. Centralization was now the problem. One of the most influential organizational sociologists of the postwar era, Harold Wilensky, went out of his way to praise the “constructive rivalry” fostered by Franklin D. Roosevelt, which, he says, is why the President had such formidable intelligence on how to attack the economic ills of the Great Depression. In his classic 1967 work Organizational Intelligence, Wilensky pointed out that Roosevelt would

use one anonymous informant’s information to challenge and check another’s, putting both on their toes; he recruited strong personalities and structured their work so that clashes would be certain…In foreign affairs, he gave Moley and Welles tasks that overlapped those of Secretary of State Hull; in conservation and power, he gave Ickes and Wallace identical missions; in welfare, confusing both functions and initials, he assigned PWA to Ickes, WPA to Hopkins; in politics, Farley found himself competing with other political advisors for control over patronage. The effect: the timely advertisement of arguments, with both the experts and the President pressured to consider the main choices as they came boiling up from below.

The intelligence community that we had prior to September 11 was the direct result of this philosophy. The FBI and the CIA were supposed to be rivals, just as Ickes and Wallace were rivals. But now we’ve changed our minds. The FBI and the CIA, Senator Shelby tells us disapprovingly, argue and compete with one another. The September 11 story, his report concludes, “should be an object lesson in the perils of failing to share information promptly and efficiently between (and within) organizations.” Shelby wants recentralization and more focus on cooperation. He wants a “central national level knowledge-compiling entity standing above and independent from the disputatious bureaucracies.” He thinks the intelligence service should be run by a small, highly cohesive group, and so he suggests that the FBI be removed from the counterterrorism business entirely. The FBI, according to Shelby, is governed by

deeply entrenched individual mind-sets that prize the production of evidence-supported narratives of defendant wrongdoing over the drawing of probabilistic inferences based on incomplete and fragmentary information in order to support decision-making… Law enforcement organizations handle information, reach conclusions, and ultimately just think differently than intelligence organizations. Intelligence analysts would doubtless make poor policemen, and it has become very clear that policemen make poor intelligence analysts.

You are reading What the Dog Saw