neutrons. Their work confirmed Fermi’s own experiments. He had guessed correctly that when neutrons cracked an atomic nucleus, they would set more neutrons free. Each would scatter like a subatomic shotgun pellet, and with enough uranium handy, they would find more nuclei to destroy. The process would cascade, and a lot of energy would be released. He suspected Nazi Germany would be interested in that.

On December 2, 1942, in a squash court beneath the stadium at the University of Chicago, Fermi and his new American colleagues produced a controlled nuclear chain reaction. Their primitive reactor was a beehive-shaped pile of graphite bricks laced with uranium. By inserting rods coated with cadmium, which absorbs neutrons, they could moderate the exponential shattering of uranium atoms to keep it from getting out of hand.

Less than three years later, in the New Mexico desert, they did just the opposite. The nuclear reaction this time was intended to go completely out of control. Immense energy was released, and within a month the act was repeated twice, over two Japanese cities. More than 100,000 people died instantly, and the dying continued long after the initial blast. Ever since, the human race has been simultaneously terrified and fascinated by the double deadliness of nuclear fission: fantastic destruction followed by slow torture.

If we left this world tomorrow—presumably by some means other than blowing ourselves to bits—we would leave behind about 30,000 intact nuclear warheads. The chance of any exploding with us gone is effectively zero. The fissionable material inside a basic uranium bomb is separated into chunks that, to achieve the critical mass necessary for detonation, must be slammed together with a speed and precision that don’t occur in nature. Dropping them, striking them, plunging them in water, or rolling a boulder over them would do nothing. On the tiny chance that the polished surfaces of enriched uranium in a deteriorated bomb actually met, unless forced together at gunshot speed, they would fizzle—albeit in a very messy way.

A plutonium weapon contains a single fissionable ball that must be forcibly, exactly compressed to at least twice its density to explode. Otherwise, it’s simply a poisonous lump. What will happen, however, is that bomb housings will ultimately corrode, exposing the hot innards of these devices to the elements. Since weapons-grade plutonium-239 has a half-life of 24,110 years, even if it took an ICBM cone 5,000 years to disintegrate, most of the 10 to 20 pounds of plutonium it contained would not have degraded. The plutonium would throw off alpha particles—clumps of two protons and two neutrons heavy enough to be blocked by fur or even thick skin, but disastrous to any creature unlucky enough to inhale them. (In humans, 1 millionth of a gram can cause lung cancer.) In 125,000 years, there would be less than a pound of it, though it would still be plenty lethal. It would take 250,000 years before the levels were lost in the Earth’s natural background radiation.
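As a rough check of that timescale (an illustrative back-of-the-envelope calculation, not a figure from the book), radioactive decay follows the standard half-life formula:

\[
N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad
N(125{,}000\ \mathrm{yr}) \approx 20\ \mathrm{lb} \times \left(\tfrac{1}{2}\right)^{125{,}000/24{,}110} \approx 0.5\ \mathrm{lb},
\]

which agrees with the “less than a pound” figure above; starting from 10 pounds instead of 20 leaves roughly a quarter pound.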

At that point, however, whatever lives on Earth would still have to contend with the still-deadly dregs of 441 nuclear plants.

2. Sunscreen

When big, unstable atoms like uranium decay naturally, or when we rip them apart, they emit charged particles and electromagnetic rays similar to the strongest X-rays. Both are potent enough to alter living cells and DNA. As these deformed cells and genes reproduce and replicate, we sometimes get another kind of chain reaction, called cancer.

Since background radiation is always present, organisms have adjusted accordingly by selecting, evolving, or sometimes just succumbing. Anytime we raise the natural background dosage, we force living tissue to respond. Two decades prior to harnessing nuclear fission, first for bombs, then for power plants, humans had already let one electromagnetic genie loose—the result of a goof we wouldn’t recognize until nearly 60 years later. In that instance, we didn’t coax radiation out but let it sneak in.

That radiation was ultraviolet, a considerably lower energy wave than the gamma rays emitted from atomic nuclei, but it was suddenly present at levels unseen since the beginning of life on Earth. Those levels are still rising, and although we have hopes to correct that over the next half century, our untimely departure could leave them in an elevated state far longer.

Ultraviolet rays helped to fashion life as we know it—and, oddly enough, they created the ozone layer itself, our shield against too much exposure to them. Back when the primordial goo of the planet’s surface was being pelted with unimpeded UV radiation from the sun, at some pivotal instant—perhaps sparked by a jolt of lightning— the first biological mix of molecules jelled. Those living cells mutated rapidly under the high energy of ultraviolet rays, metabolizing inorganic compounds and turning them into new organic ones. Eventually, one of these reacted to the presence of carbon dioxide and sunlight in the primitive atmosphere by giving off a new kind of exhaust: oxygen.

That gave ultraviolet rays a new target. Picking off pairs of oxygen atoms joined together—O2 molecules—they split them apart. The two singles would immediately latch onto nearby O2 molecules, forming O3: ozone. But UV easily breaks the ozone molecule’s extra atom off, reforming oxygen; just as quickly, that atom sticks to another pair, forming more ozone until it absorbs more ultraviolet and spins off again.
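Written out as reactions (a standard textbook summary of the cycle described above, not equations given in the text), the sequence looks like this:

\[
\mathrm{O_2} + h\nu \rightarrow \mathrm{O} + \mathrm{O}, \qquad
\mathrm{O} + \mathrm{O_2} \rightarrow \mathrm{O_3}, \qquad
\mathrm{O_3} + h\nu \rightarrow \mathrm{O_2} + \mathrm{O},
\]

where \(h\nu\) is an ultraviolet photon. The freed oxygen atom promptly finds another O2 and remakes ozone, which is why the cycle consumes ultraviolet light rather than ozone.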

Gradually, beginning about 10 miles above the surface, a state of equilibrium emerged: ozone was constantly being created, pulled apart, and recombined, and thus constantly occupying UV rays so that they never reached the ground. As the layer of ozone stabilized, so did the life on Earth it was shielding. Eventually, species evolved that could never have tolerated the former levels of UV radiation bombardment. Eventually, one of them was us.

In the 1930s, however, humans started undermining the oxygen-ozone balance, which had remained relatively constant since soon after life began. That’s when we started using Freon, the trademark name for chlorofluorocarbons, the man-made chlorine compounds in refrigeration. Called CFCs for short, they seemed so safely inert that we put them into aerosol cans and asthma-medication inhalers, and blew them into polymer foams to make disposable coffee cups and running shoes.

In 1974, University of California-Irvine chemists F. Sherwood Rowland and Mario Molina began to wonder where CFCs went once those refrigerators or materials broke down, since they were so impervious to combining with anything else. Eventually, they decided that hitherto indestructible CFCs must be floating to the stratosphere, where they would finally meet their match in the form of powerful ultraviolet rays. The molecular slaughter would free pure chlorine, a voracious gobbler of loose oxygen atoms, whose presence kept those same ultraviolet rays away from Earth.
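The chlorine chemistry Rowland and Molina proposed is usually summarized as a catalytic cycle (again a standard shorthand, not wording from the book), here shown for the common refrigerant CFC-11:

\[
\mathrm{CFCl_3} + h\nu \rightarrow \mathrm{CFCl_2} + \mathrm{Cl}, \qquad
\mathrm{Cl} + \mathrm{O_3} \rightarrow \mathrm{ClO} + \mathrm{O_2}, \qquad
\mathrm{ClO} + \mathrm{O} \rightarrow \mathrm{Cl} + \mathrm{O_2}.
\]

The net effect is to turn an ozone molecule and a free oxygen atom into ordinary O2, and because the chlorine atom emerges intact at the end, a single atom can destroy many thousands of ozone molecules before it is finally locked away in a more stable compound.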

No one paid Rowland and Molina much heed until 1985, when Joe Farman, a British researcher in Antarctica, discovered that part of the sky was missing. For decades, we’d been dissolving our UV screen by soaking it with chlorine. Since then, in unprecedented cooperation, the nations of the world have tried to phase out ozone-eating chemicals. The results are encouraging, but still mixed: Ozone destruction has slowed, but a black market in CFCs thrives, and some are still legally produced for “basic domestic needs” in developing countries. Even the replacements we commonly use today, hydrochlorofluorocarbons, HCFCs, are simply milder ozone-destroyers, scheduled to be phased out themselves—though the question of with what isn’t easily answered.

Quite apart from ozone damage, both HCFCs and CFCs—and their most common chlorine-free substitute, hydrofluorocarbons, HFCs—have many times the potential of carbon dioxide to exacerbate global warming. The use of all these alphabetical concoctions will stop, of course, if human activity does, but the damage we did to the sky may last a lot longer. The best current hope is that the South Pole’s hole, and the thinning of the ozone layer everywhere else, will heal by 2060, after destructive substances are exhausted. This assumes that something safe will have replaced them, and that we’ll have found ways to get rid of existing supplies that haven’t yet drifted skyward. Destroying something designed to be indestructible, however, turns out to be expensive, requiring sophisticated, energy-intensive tools such as argon plasma arcs and rotary kilns that aren’t readily available in much of the world.

As a result, especially in developing countries, millions of tons of CFCs are still used or linger in aging equipment, or are mothballed. If we vanish, millions of CFC and HCFC automobile air conditioners, and millions more domestic and commercial refrigerators, refrigerated trucks and railroad cars, as well as home and industry air-cooling units, will all finally crack and give up the chlorofluorocarbonated ghost of a 20th-century idea that went very awry.
