found several chunks of charred metal that confirmed what had previously been only suspected: the explosion was caused by a faulty seal in one of the shuttle’s rocket boosters, which had allowed a stream of flame to escape and ignite an external fuel tank.

Armed with this confirmation, a special presidential investigative commission concluded the following June that the deficient seal reflected shoddy engineering and lax management at NASA and its prime contractor, Morton Thiokol. Properly chastised, NASA returned to the drawing board, to emerge thirty-two months later with a new shuttle—Discovery—redesigned according to the lessons learned from the disaster. During that first post-Challenger flight, as America watched breathlessly, the crew of the Discovery held a short commemorative service. “Dear friends,” the mission commander, Captain Frederick H. Hauck, said, addressing the seven dead Challenger astronauts, “your loss has meant that we could confidently begin anew.” The ritual was complete. NASA was back.

But what if the assumptions that underlie our disaster rituals aren’t true? What if these public postmortems don’t help us avoid future accidents? Over the past few years, a group of scholars has begun making the unsettling argument that the rituals that follow things like plane crashes or the Three Mile Island crisis are as much exercises in self-deception as they are genuine opportunities for reassurance. For these revisionists, high-technology accidents may not have clear causes at all. They may be inherent in the complexity of the technological systems we have created.

This revisionism has now been extended to the Challenger disaster, with the publication of The Challenger Launch Decision, by the sociologist Diane Vaughan, which is the first truly definitive analysis of the events leading up to January 28, 1986. The conventional view is that the Challenger accident was an anomaly, that it happened because people at NASA had not done their job. But the study’s conclusion is the opposite: it says that the accident happened because people at NASA had done exactly what they were supposed to do. “No fundamental decision was made at NASA to do evil,” Vaughan writes. “Rather, a series of seemingly harmless decisions were made that incrementally moved the space agency toward a catastrophic outcome.”

No doubt Vaughan’s analysis will be hotly disputed, but even if she is only partly right, the implications of this kind of argument are enormous. We have surrounded ourselves in the modern age with things like power plants and nuclear weapons systems and airports that handle hundreds of planes an hour, on the understanding that the risks they represent are, at the very least, manageable. But if the potential for catastrophe is actually found in the normal functioning of complex systems, this assumption is false. Risks are not easily manageable, accidents are not easily preventable, and the rituals of disaster have no meaning. The first time around, the story of the Challenger was tragic. In its retelling, a decade later, it is merely banal.

2.

Perhaps the best way to understand the argument over the Challenger explosion is to start with an accident that preceded it—the near disaster at the Three Mile Island (TMI) nuclear-power plant in March of 1979. The conclusion of the president’s commission that investigated the TMI accident was that it was the result of human error, particularly on the part of the plant’s operators. But the truth of what happened there, the revisionists maintain, is a good deal more complicated than that, and their arguments are worth examining in detail.

The trouble at TMI started with a blockage in what is called the plant’s polisher—a kind of giant water filter. Polisher problems were not unusual at TMI, or particularly serious. But in this case the blockage caused moisture to leak into the plant’s air system, inadvertently tripping two valves and shutting down the flow of cold water into the plant’s steam generator.

As it happens, TMI had a backup cooling system for precisely this situation. But on that particular day, for reasons that no one really knows, the valves for the backup system weren’t open. They had been closed, and an indicator in the control room showing they were closed was blocked by a repair tag hanging from a switch above it. That left the reactor dependent on another backup system, a special sort of relief valve. But, as luck would have it, the relief valve wasn’t working properly that day, either. It stuck open when it was supposed to close, and, to make matters even worse, a gauge in the control room which should have told the operators that the relief valve wasn’t working was itself not working. By the time TMI’s engineers realized what was happening, the reactor had come dangerously close to a meltdown.

Here, in other words, was a major accident caused by five discrete events. There is no way the engineers in the control room could have known about any of them. No glaring errors or spectacularly bad decisions were made that exacerbated those events. And all the malfunctions—the blocked polisher, the shut valves, the obscured indicator, the faulty relief valve, and the broken gauge—were in themselves so trivial that individually they would have created no more than a nuisance. What caused the accident was the way minor events unexpectedly interacted to create a major problem.

This kind of disaster is what the Yale University sociologist Charles Perrow has famously called a normal accident. By normal, Perrow does not mean that it is frequent; he means that it is the kind of accident one can expect in the normal functioning of a technologically complex operation. Modern systems, Perrow argues, are made up of thousands of parts, all of which interrelate in ways that are impossible to anticipate. Given that complexity, he says, it is almost inevitable that some combinations of minor failures will eventually amount to something catastrophic. In a classic 1984 treatise on accidents, Perrow takes examples of well-known plane crashes, oil spills, chemical-plant explosions, and nuclear-weapons mishaps and shows how many of them are best understood as normal. If you saw the movie Apollo 13, in fact, you have seen a perfect illustration of one of the most famous of all normal accidents: the Apollo flight went awry because of the interaction of failures of the spacecraft’s oxygen and hydrogen tanks, and an indicator light that diverted the astronauts’ attention from the real problem.

Had this been a “real” accident—if the mission had run into trouble because of one massive or venal error—the story would have made for a much inferior movie. In real accidents, people rant and rave and hunt down the culprit. They do, in short, what people in Hollywood thrillers always do. But what made Apollo 13 unusual was that the dominant emotion was not anger but bafflement—bafflement that so much could go wrong for so little apparent reason. There was no one to blame, no dark secret to unearth, no recourse but to re-create an entire system in place of one that had inexplicably failed. In the end, the normal accident was the more terrifying one.

3.

Was the Challenger explosion a normal accident? In a narrow sense, the answer is no. Unlike what happened at TMI, its explosion was caused by a single, catastrophic malfunction: the so-called O-rings that were supposed to prevent hot gases from leaking out of the rocket boosters didn’t do their job. But Vaughan argues that the O-ring problem was really just a symptom. The cause of the accident was the culture of NASA, she says, and that culture led to a series of decisions about the Challenger that very much followed the contours of a normal accident.

The heart of the question is how NASA chose to evaluate the problems it had been having with the rocket boosters’ O-rings. These are the thin rubber bands that run around the lips of each of the rocket’s four segments, and each O-ring was meant to work like the rubber seal on the top of a bottle of preserves, making the fit between each part of the rocket snug and airtight. But from as far back as 1981, on one shuttle flight after another, the O-rings had shown increasing problems. In a number of instances, the rubber seal had been dangerously eroded—a condition suggesting that hot gases had almost escaped. What’s more, O-rings were strongly suspected to be less effective in cold weather, when the rubber would harden and not give as tight a seal.
