weapons had amended Clausewitz’s famous definition ('War is… a continuation of political activity by other means'), because now the threat of attack could substitute for the attack itself. Thus came about the doctrine of symmetrical deterrence known later as the “balance of terror.” Different American administrations advocated it with different initials. There was, for example, MAD (Mutual Assured Destruction), based on the “second-strike” principle (the ability of the country attacked to retaliate in force). The vocabulary of destruction was enriched in the next decades. There was “Total Strategic Exchange,” meaning all-out nuclear war; MIRV (Multiple Independently Targetable Re-entry Vehicle), a missile firing a number of warheads simultaneously, each aimed at a different target; PENAID (Penetration Aids), dummy missiles to fool the opponent’s radar; and MARY (Maneuverable Re-entry), a missile capable of evading antimissiles and of hitting the target within fifty feet of the programmed “ground zero.” But to list even a hundredth of the succession of specialized terms is impossible here.

Although the danger of atomic warfare increased whenever “equality” was lessened, and therefore the rational thing would seem to have been to preserve that equality under multinational supervision, the antagonists did not reach an agreement despite repeated negotiations.

There were many reasons, which the authors of Weapons Systems divide into two groups. In the first group they see the pressure of traditional thinking in international politics. Tradition has determined that one should call for peace but prepare for war, upsetting the existing balance until the upper hand is gained. The second group consists of factors, both political and nonpolitical, independent of human thought; these have to do with the evolution of the major applied military technologies.

Each new possibility of technological improvement in weaponry became a reality, on the principle “If we don’t do it, they will.” Meanwhile, the doctrine of nuclear warfare went through changes. At one time it advocated a limited exchange of nuclear strikes (though no one knew exactly what the guarantee of the limitation would be); at another, its goal was the total annihilation of the enemy (all of whose population became “hostages” of a sort); at still another, it gave first priority to destroying the enemy’s military-industrial potential.

The ancient law of “sword and shield” still held sway in the evolution of weaponry. The shield took the form of hardening the silos that housed the missiles, while the sword to pierce the shield involved making the missiles increasingly accurate and, later, providing them with self-guidance systems and self-maneuverability. For atomic submarines the shield was the ocean; improved methods for their underwater detection constituted the sword.

Technological progress in defense sent electronic “eyes” into orbit, creating a high frontier of global reconnaissance able to spot missiles at the moment of launch. This was the shield that the new type of sword — the “killer satellite” — was to break, with a laser to blind the defending “eyes,” or with a lightninglike discharge of immense power to destroy the missiles themselves during their flight above the atmosphere.

But the hundreds of billions of dollars invested in building these higher and higher levels of conflict failed, ultimately, to produce any definite, and therefore valuable, strategic advantage — and for two very different, almost unrelated reasons.

In the first place, all these improvements and innovations, instead of increasing strategic security, offensive or defensive, only reduced it. Security was reduced because the global system of each superpower grew more and more complex, composed of an increasing number of different subsystems on land, sea, and air and in space. Military success required infallible communications to guarantee the optimum synchronization of operations. But all systems that are highly complex, whether they be industrial or military, biological or technological, whether they process information or raw material, are prone to breakdown, to a degree mathematically proportional to the number of elements that make up the system. Progress in military technology carried with it a unique paradox: the more sophisticated the weapon it produced, the greater was the role of chance (which could not be calculated) in the weapon’s successful use.

This fundamental problem must be explained carefully, because for a long time scientists had no way to take the randomness of complex systems into account in engineering practice. To counteract malfunctions in such systems, engineers introduced redundancy: power reserves, for example, or — as with the first American space shuttles (like the Columbia) — the doubling, even quadrupling of parallel onboard computers. But total reliability is unattainable: if a system has a million elements and each element will malfunction only one time out of a million, a breakdown is certain.
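The arithmetic behind that claim can be sketched in a few lines (an illustrative aside, not part of the text; the function name is my own). Each element survives a single run with probability 1 − p, so a system of n independent elements runs flawlessly only with probability (1 − p)ⁿ:

```python
def system_failure_probability(n: int, p: float) -> float:
    """Chance that at least one of n independent elements malfunctions,
    when each fails with probability p on a single run."""
    return 1.0 - (1.0 - p) ** n

# A million elements, each failing one time in a million:
single_run = system_failure_probability(10**6, 1e-6)
print(round(single_run, 3))  # roughly 0.632: likelier to fail than not

# Over repeated operating cycles, a flawless record becomes vanishingly rare:
ten_runs = 1.0 - (1.0 - single_run) ** 10
print(round(ten_runs, 6))
```

Even on a single run the odds favor a malfunction somewhere in the system, and over repeated operation the breakdown becomes, for practical purposes, the certainty the text describes.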

The bodies of animals and plants consist of trillions of functioning parts, yet life copes with the phenomenon of inevitable failure. In what way? The experts call it the construction of reliable systems out of unreliable components. Natural evolution uses various tactics to counteract the fallibility of organisms: the capacity for self- repair or regeneration; surplus organs (this is why we have two kidneys instead of one, why a half-destroyed liver can still function as the body’s central chemical-processing plant, and why the circulatory system has so many alternate veins and arteries); and the separation of control centers for the somatic and psychic processes. This last phenomenon gave brain researchers much trouble: they could not understand why a seriously injured brain still functioned but a slightly damaged computer refused to obey its programs.

The mere doubling of control centers and parts, as practiced in twentieth-century engineering, led to the absurd in actual construction. If an automated spaceship going to a distant planet were built according to the directive of multiplying pilot computers, as in the shuttles, then it would have to contain — in view of the duration of the flight — not four or five but possibly fifty such computers. They would operate not by “linear logic” but by “voting”: once the individual computers ceased functioning identically and thus diverged in their results, one would have to accept, as the right result, what was reached by the majority. But this kind of engineering parliamentarianism led to the production of giants burdened with the woes typical of democracies: contradictory views, plans, and actions. To such pluralism, to such programmed elasticity, there had to be a limit.
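The “voting” described here can be sketched as a simple tally (a hypothetical illustration; the function name is my own, and a real flight system would be far more elaborate). The answer reached by the most computers is accepted as correct, and an even split is exactly the parliamentary deadlock the text warns of:

```python
from collections import Counter

def majority_vote(results):
    """Accept as 'right' the result reported by the most redundant computers.

    Strictly this takes a plurality; an even split between the two
    leading answers is treated as a deadlock.
    """
    (top_value, top_count), *rest = Counter(results).most_common()
    if rest and rest[0][1] == top_count:
        raise ValueError("deadlock: no majority among the computers")
    return top_value

# Five onboard computers, one of which has drifted:
print(majority_vote([42, 42, 42, 41, 42]))  # prints 42
```

The scheme works only while a clear majority exists; as the machines age and diverge, the “parliament” fragments, which is the limit to programmed pluralism the passage describes.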

We should have begun much earlier — said the twenty-first-century specialists — to learn from biological evolution, whose several-billion-year existence demonstrates optimal strategic engineering. A living organism is not guided by “totalitarian centralism” or “democratic pluralism,” but by a strategy much more complex. Simplifying, we might call it a compromise between concentration and separation of the regulating centers.

Meanwhile, in the late-twentieth-century phase of the arms race, the role of unpredictable chance increased. When hours (or days) and miles (or hundreds of miles) separate defeat from victory, and therefore an error of command can be remedied by throwing in reserves, or retreating, or counterattacking, then there is room to reduce the element of chance. But when micromillimeters and nanoseconds determine the outcome, then chance enters like a god of war, deciding victory or defeat; it is magnified and lifted out of the microscopic scale of atomic physics. The fastest, best weapons system comes up against the Heisenberg uncertainty principle, which nothing can overcome, because that principle is a basic property of matter in the Universe. It need not be a computer breakdown in satellite reconnaissance or in missiles whose warheads parry defenses with laser beams; if a series of electronic defensive impulses is even a billionth of a second slow in meeting a similar series of offensive impulses, that is enough for a toss of the dice to decide the outcome of the Final Encounter.

Unaware of this state of affairs, the major antagonists of the planet devised two opposite strategies. One can call them the “scalpel” and the “hammer.” The constant escalation of payload megatonnage was the hammer; the improvement of detection and swift destruction in flight was the scalpel. They also reckoned on the deterrent of the “dead man’s revenge”: the enemy would realize that even in winning he would perish, since a totally obliterated country would still respond — automatically and posthumously — with a strike that would make defeat universal. Such was the direction the arms race was taking, and such was its destination, which no one wanted but no one knew how to avoid.

How does the engineer minimize error in a very large, very complex system? He does trial runs to test it; he looks for weak spots, weak links. But there was no way of testing a system designed to wage global nuclear war, a system made up of surface, submarine, air-launched, and satellite missiles, antimissiles, and multiple centers of command and communications, ready to loose gigantic destructive forces in wave on wave of reciprocal atomic strikes. No maneuvers, no computer simulation, could re-create the actual conditions of such a battle.

Increasing speed of operation marked each new weapons system, particularly the decision-making function (to strike or not to strike, where, how, with what force held in reserve, at what risk, etc.), and this increasing speed also brought the incalculable factor of chance into play. Lightning-fast systems made lightning-fast mistakes. When a fraction of a second determined the safety or destruction of a region, a great metropolis, an industrial complex, or a large fleet, it was impossible to achieve military certainty. One could even say that victory had ceased to be
