seal. On the morning of January 28, 1986, the shuttle launchpad was encased in ice, and the temperature at liftoff was just above freezing. Anticipating these low temperatures, engineers at Morton Thiokol, the manufacturer of the shuttle’s rockets, recommended that the launch be delayed. Morton Thiokol brass and NASA, however, overruled the recommendation, and that decision led both the president’s commission and numerous critics since to accuse NASA of egregious—if not criminal—misjudgment.

Vaughan doesn’t dispute that the decision was fatally flawed. But, after reviewing thousands of pages of transcripts and internal NASA documents, she can’t find any evidence of people acting negligently, or nakedly sacrificing safety in the name of politics or expediency. The mistakes that NASA made, she says, were made in the normal course of operation. For example, in retrospect it may seem obvious that cold weather impaired O-ring performance. But it wasn’t obvious at the time. A previous shuttle flight that had suffered worse O-ring damage had been launched in 75-degree heat. And on a series of previous occasions when NASA had proposed—but eventually scrubbed for other reasons—shuttle launches in weather as cold as 41 degrees, Morton Thiokol had not said a word about the potential threat posed by the cold, so its pre-Challenger objection had seemed to NASA not reasonable but arbitrary. Vaughan confirms that there was a dispute between managers and engineers on the eve of the launch but points out that in the shuttle program, disputes of this sort were commonplace. And, while the president’s commission was astonished by NASA’s repeated use of the phrases “acceptable risk” and “acceptable erosion” in internal discussion of the rocket-booster joints, Vaughan shows that flying with acceptable risks was a standard part of NASA culture. The lists of acceptable risks on the space shuttle, in fact, filled six volumes. “Although [O-ring] erosion itself had not been predicted, its occurrence conformed to engineering expectations about large-scale technical systems,” she writes. “At NASA, problems were the norm. The word ‘anomaly’ was part of everyday talk… The whole shuttle system operated on the assumption that deviation could be controlled but not eliminated.”

What NASA had created was a closed culture that, in her words, “normalized deviance” so that to the outside world, decisions that were obviously questionable were seen by NASA’s management as prudent and reasonable. It is her depiction of this internal world that makes her book so disquieting: when she lays out the sequence of decisions that led to the launch—each decision as trivial as the string of failures that led to the near disaster at TMI—it is difficult to find any precise point where things went wrong or where things might be improved next time. “It can truly be said that the Challenger launch decision was a rule-based decision,” she concludes. “But the cultural understandings, rules, procedures, and norms that always had worked in the past did not work this time. It was not amorally calculating managers violating rules that were responsible for the tragedy. It was conformity.”

4.

There is another way to look at this problem, and that is from the standpoint of how human beings handle risk. One of the assumptions behind the modern disaster ritual is that when a risk can be identified and eliminated, a system can be made safer. The new booster joints on the shuttle, for example, are so much better than the old ones that the overall chances of a Challenger-style accident’s ever happening again must be lower, right? This is such a straightforward idea that questioning it seems almost impossible. But that is just what another group of scholars has done, under what is called the theory of risk homeostasis. It should be said that within the academic community, there are huge debates over how widely the theory of risk homeostasis can and should be applied. But the basic idea, which has been laid out brilliantly by the Canadian psychologist Gerald Wilde in his book Target Risk, is quite simple: under certain circumstances, changes that appear to make a system or an organization safer in fact don’t. Why? Because human beings have a seemingly fundamental tendency to compensate for lower risks in one area by taking greater risks in another.

Consider, for example, the results of a famous experiment conducted several years ago in Germany. Part of a fleet of taxicabs in Munich was equipped with antilock brake systems (ABS), a technological innovation that vastly improves braking, particularly on slippery surfaces. The rest of the fleet was left alone, and the two groups—which were otherwise perfectly matched—were placed under careful and secret observation for three years. You would expect the better brakes to make for safer driving. But that is exactly the opposite of what happened. Giving some drivers ABS made no difference at all in their accident rate; in fact, it turned them into markedly inferior drivers. They drove faster. They made sharper turns. They showed poorer lane discipline. They braked harder. They were more likely to tailgate. They didn’t merge as well, and they were involved in more near misses. In other words, the ABS systems were not used to reduce accidents; instead, the drivers used the additional element of safety to enable them to drive faster and more recklessly without increasing their risk of getting into an accident. As economists would say, they consumed the risk reduction, they didn’t save it.

Risk homeostasis doesn’t happen all the time. Often—as in the case of seat belts, say—compensatory behavior only partly offsets the risk reduction of a safety measure. But it happens often enough that it must be given serious consideration. Why are more pedestrians killed crossing the street at marked crosswalks than at unmarked crosswalks? Because they compensate for the “safe” environment of a marked crossing by being less vigilant about oncoming traffic. Why did the introduction of childproof lids on medicine bottles lead, according to one study, to a substantial increase in fatal child poisonings? Because adults became less careful in keeping pill bottles out of the reach of children.

Risk homeostasis also works in the opposite direction. In the late 1960s, Sweden changed over from driving on the left-hand side of the road to driving on the right, a switch that one would think would create an epidemic of accidents. But, in fact, the opposite was true. People compensated for their unfamiliarity with the new traffic patterns by driving more carefully. During the next twelve months, traffic fatalities dropped 17 percent before returning slowly to their previous levels. As Wilde only half-facetiously argues, countries truly interested in making their streets and highways safer should think about switching over from one side of the road to the other on a regular basis.

It doesn’t take much imagination to see how risk homeostasis applies to NASA and the space shuttle. In one frequently quoted phrase, Richard Feynman, the Nobel Prize-winning physicist who served on the Challenger commission, said that at NASA decision-making was “a kind of Russian roulette.” When the O-rings began to have problems and nothing happened, the agency began to believe that “the risk is no longer so high for the next flights,” Feynman said, and that “we can lower our standards a little bit because we got away with it last time.” But fixing the O-rings doesn’t mean that this kind of risk-taking stops. There are six whole volumes of shuttle components that are deemed by NASA to be as risky as O-rings. It is entirely possible that better O-rings just give NASA the confidence to play Russian roulette with something else.

This is a depressing conclusion, but it shouldn’t come as a surprise. The truth is that our stated commitment to safety, our faithful enactment of the rituals of disaster, has always masked a certain hypocrisy. We don’t really want the safest of all possible worlds. The national 55-mile-per-hour speed limit probably saved more lives than any other single government intervention of the past generation. But the fact that Congress lifted it last month with a minimum of argument proves that we would rather consume the recent safety advances of things like seat belts and air bags than save them. The same is true of the dramatic improvements that have been made in recent years in the design of aircraft and flight-navigation systems. Presumably, these innovations could be used to bring down the airline accident rate as low as possible. But that is not what consumers want. They want air travel to be cheaper, more reliable, or more convenient, and so those safety advances have been at least partly consumed by flying and landing planes in worse weather and heavier traffic conditions.

What accidents like the Challenger should teach us is that we have constructed a world in which the potential for high-tech catastrophe is embedded in the fabric of day-to-day life. At some point in the future—for the most mundane of reasons, and with the very best of intentions—a NASA spacecraft will again go down in flames.
