To capture control of technology, and through it gain some influence over the accelerative thrust in general, we must, therefore, begin to submit new technology to a set of demanding tests before we unleash it in our midst. We must ask a whole series of unaccustomed questions about any innovation before giving it a clean bill of sale.

First, bitter experience should have taught us by now to look far more carefully at the potential physical side effects of any new technology. Whether we are proposing a new form of power, a new material, or a new industrial chemical, we must attempt to determine how it will alter the delicate ecological balance upon which we depend for survival. Moreover, we must anticipate its indirect effects over great distances in both time and space. Industrial waste dumped into a river can turn up hundreds, even thousands of miles away in the ocean. DDT may not show its effects until years after its use. So much has been written about this that it seems hardly necessary to belabor the point further.

Second, and much more complex, we must question the long-term impact of a technical innovation on the social, cultural and psychological environment. The automobile is widely believed to have changed the shape of our cities, shifted home ownership and retail trade patterns, altered sexual customs and loosened family ties. In the Middle East, the rapid spread of transistor radios is credited with having contributed to the resurgence of Arab nationalism. The birth control pill, the computer, the space effort, as well as the invention and diffusion of such 'soft' technologies as systems analysis, all have carried significant social changes in their wake.

We can no longer afford to let such secondary social and cultural effects just 'happen.' We must attempt to anticipate them in advance, estimating, to the degree possible, their nature, strength and timing. Where these effects are likely to be seriously damaging, we must also be prepared to block the new technology. It is as simple as that. Technology cannot be permitted to rampage through the society.

It is quite true that we can never know all the effects of any action, technological or otherwise. But it is not true that we are helpless. It is, for example, sometimes possible to test new technology in limited areas, among limited groups, studying its secondary impacts before releasing it for diffusion. We could, if we were imaginative, devise living experiments, even volunteer communities, to help guide our technological decisions. Just as we may wish to create enclaves of the past where the rate of change is artificially slowed, or enclaves of the future in which individuals can pre-sample future environments, we may also wish to set aside, even subsidize, special high-novelty communities in which advanced drugs, power sources, vehicles, cosmetics, appliances and other innovations are experimentally used and investigated.

A corporation today will routinely field test a product to make sure it performs its primary function. The same company will market test the product to ascertain whether it will sell. But, with rare exception, no one post-checks the consumer or the community to determine what the human side effects have been. Survival in the future may depend on our learning to do so.

Even when life-testing proves unfeasible, it is still possible for us systematically to anticipate the distant effects of various technologies. Behavioral scientists are rapidly developing new tools, from mathematical modeling and simulation to so-called Delphi analyses, that permit us to make more informed judgments about the consequences of our actions. We are piecing together the conceptual hardware needed for the social evaluation of technology; we need but to make use of it.

Third, an even more difficult and pointed question: Apart from actual changes in the social structure, how will a proposed new technology affect the value system of the society? We know little about value structures and how they change, but there is reason to believe that they, too, are heavily impacted by technology. Elsewhere I have proposed that we develop a new profession of 'value impact forecasters' – men and women trained to use the most advanced behavioral science techniques to appraise the value implications of proposed technology.

At the University of Pittsburgh in 1967 a group of distinguished economists, scientists, architects, planners, writers, and philosophers engaged in a day-long simulation intended to advance the art of value forecasting. At Harvard, the Program on Technology and Society has undertaken work relevant to this field. At Cornell and at the Institute for the Study of Science in Human Affairs at Columbia, an attempt is being made to build a model of the relationship between technology and values, and to design a game useful in analyzing the impact of one on the other. All these initiatives, while still extremely primitive, give promise of helping us assess new technology more sensitively than ever before.

Fourth and finally, we must pose a question that until now has almost never been investigated, and which is, nevertheless, absolutely crucial if we are to prevent widespread future shock. For each major technological innovation we must ask: What are its accelerative implications?

The problems of adaptation already far transcend the difficulties of coping with this or that invention or technique. Our problem is no longer the innovation, but the chain of innovations, not the supersonic transport, or the breeder reactor, or the ground effect machine, but entire inter-linked sequences of such innovations and the novelty they send flooding into the society.

Does a proposed innovation help us control the rate and direction of subsequent advance? Or does it tend to accelerate a host of processes over which we have no control? How does it affect the level of transience, the novelty ratio, and the diversity of choice? Until we systematically probe these questions, our attempts to harness technology to social ends – and to gain control of the accelerative thrust in general – will prove feeble and futile.

Here, then, is a pressing intellectual agenda for the social and physical sciences. We have taught ourselves to create and combine the most powerful of technologies. We have not taken pains to learn about their consequences. Today these consequences threaten to destroy us. We must learn, and learn fast.

A TECHNOLOGY OMBUDSMAN

The challenge, however, is not solely intellectual; it is political as well. In addition to designing new research tools – new ways to understand our environment – we must also design creative new political institutions for guaranteeing that these questions are, in fact, investigated; and for promoting or discouraging (perhaps even banning) certain proposed technologies. We need, in effect, a machinery for screening machines.

A key political task of the next decade will be to create this machinery. We must stop being afraid to exert systematic social control over technology. Responsibility for doing so must be shared by public agencies and the corporations and laboratories in which technological innovations are hatched.

Any suggestion for control over technology immediately raises scientific eyebrows. The specter of ham-handed governmental interference is invoked. Yet controls over technology need not imply limitations on the freedom to conduct research. What is at issue is not discovery but diffusion, not invention but application. Ironically, as sociologist Amitai Etzioni points out, 'many liberals who have fully accepted Keynesian economic controls take a laissez-faire view of technology. Theirs are the arguments once used to defend laissez-faire economics: that any attempt to control technology would stifle innovation and initiative.'

Warnings about overcontrol ought not be lightly ignored. Yet the consequences of lack of control may be far worse. In point of fact, science and technology are never free in any absolute sense. Inventions and the rate at which they are applied are both influenced by the values and institutions of the society that gives rise to them. Every society, in effect, does pre-screen technical innovations before putting them to widespread use.

The haphazard way in which this is done today, however, and the criteria on which selection is based, need to be changed. In the West, the basic criterion for filtering out certain technical innovations and applying others remains economic profitability. In communist countries, the ultimate tests have to do with whether the innovation will contribute to overall economic growth and national power. In the former, decisions are private and pluralistically decentralized. In the latter, they are public and tightly centralized.

Both systems are now obsolete – incapable of dealing with the complexity of superindustrial society. Both tend to ignore all but the most immediate and obvious consequences of technology. Yet, increasingly, it is these non-immediate and non-obvious impacts that must concern us. 'Society must so organize itself that a proportion of the very ablest and most imaginative of scientists are continually concerned with trying to foresee the long-term effects of new technology,' writes O. M. Solandt, chairman of the Science Council of Canada. 'Our present method of depending on the alertness of individuals to foresee danger and to form pressure groups that try to correct mistakes will not do for the future.'

One step in the right direction would be to create a technological ombudsman – a public agency charged with receiving, investigating, and acting on complaints having to do with the irresponsible application of technology.

Who should be responsible for correcting the adverse effects of technology? The rapid diffusion of
