technological production beyond professional scientists and engineers and their primary patrons to the general public.

The difference in the development process can be pictured in the following way. For military R&D, the scientists, the engineers and the testing process are somewhat insulated from other influences. “External” social influences on military science and technology exist, to be sure — examples include strategic policy, competition for funding, and the influence of the peace movement. But a key “social influence” is actually the very organisation of the R&D as a professional, in-house enterprise.

In a more participatory process of R&D for nonviolent struggle, there would be no clear distinction between researchers and the rest of the population. Of course, some people may be much more active than others in the process of technological innovation. But in this model, such innovation depends vitally on interaction and cooperation with a wide cross-section of the population. Furthermore, this interaction and cooperation is likely to lead to contributions by others — those who in the military model would be simply users of the technology. This participatory model of R&D undermines the special role and status of professional scientists and engineers as the exclusive holders of expertise about science and technology.[5]

There are some precedents for this sort of participatory R&D. Citizen groups in Japan — often with participation by some scientists — have investigated environmental problems, using simple techniques such as talking to people about local health problems and testing for the presence of radioactivity by observing specially sensitive plants. Such an approach was more successful in determining the cause of Minamata disease — due to mercury pollution in the ocean — than the efforts of heavily funded teams of traditional scientists using sophisticated ocean sampling and computer models.[6]

Many parts of the women’s health movement — most prominently, the Boston Women’s Health Book Collective — have reassessed available evidence and drawn on their own personal experiences to provide a different perspective about women’s health, one that is less responsive to the interests of drug companies and medical professionals and more responsive to the concerns and experiences of women themselves.[7]

AIDS activists in the US, concerned about the slow and cumbersome processes for testing and approving drugs to treat AIDS, developed their own criteria and procedures and tried them out with drugs, some of which were produced and distributed illicitly. Their efforts and political pressure led to changes in official procedures.[8]

These examples show that nonscientists can make significant contributions to the process of doing science, and in some cases do better than establishment approaches or prompt changes in them. However, the issue is not a competition between scientists and nonscientists, but rather the promotion of a fruitful interaction between them. Scientists, to do their jobs effectively, need to bring the community “into the lab”, and nonscientists need to learn what it means to do research. In the process, the distinction between the two groups would be blurred.

A good case study of the two models is the debate over encryption of digital communication described in chapter 5. The military model was embodied, literally and figuratively, in the Clipper chip, designed by the US National Security Agency so that authorised parties could decipher any encrypted messages. Clipper was designed in secrecy. It was based on the Skipjack algorithm, which remained a secret. Clipper and related systems were planned for installation in telephones and computer networks essentially as “black boxes,” which people would use but not understand. If Clipper had been a typical military technology, such as a ballistic missile or fuel-air explosive, it would have been implemented in military arenas with little debate (except perhaps from peace activists) and certainly little public input into the choice of technology.

At first glance, the participatory alternative to Clipper is public key encryption, widely favoured by computer users. But rather than the alternative being a particular technology, it is more appropriate to look at the process of choosing a technology. Encryption has been the subject of vigorous and unending discussions, especially on computer conferences. Different algorithms have been developed, tested, scrutinised and debated. This has occurred at a technical level and also a social level. Various encryption systems have been examined by top experts, who have then presented their conclusions for all to examine. As well, the social uses and implications of different systems have been debated. Last but not least, lots of people have used the encryption systems themselves. The contrast to Clipper is striking.
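For readers unfamiliar with the technology being debated, the following sketch (not part of the original discussion) may help to show what a public key system involves. It is a toy version of the RSA algorithm using deliberately tiny primes; real systems use keys hundreds of digits long and carefully scrutinised implementations, so this fragment illustrates only the idea of a public/private key pair.

```python
# Toy illustration of public key encryption (RSA with tiny primes).
# For illustration only: these numbers are far too small for real security.

p, q = 61, 53                      # two small primes, kept secret
n = p * q                          # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                             # public exponent; (e, n) is the public key
d = pow(e, -1, phi)                # private exponent (Python 3.8+); (d, n) is the private key

message = 42                       # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)    # anyone holding the public key can encrypt
recovered = pow(ciphertext, d, n)  # only the private key holder can decrypt

assert recovered == message
print(ciphertext, recovered)
```

The contrast with Clipper lies in the openness of the design: schemes of this kind, and far stronger ones, are published and can be tested and criticised by anyone, whereas the Skipjack algorithm inside Clipper remained secret.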

Even the more participatory process used in developing and assessing encryption is still limited to a small part of the population. This is inevitable, since not everyone can be involved in looking at every technology. The point is that the process is relatively open: far more people have investigated cryptography in relation to public key encryption than could ever be the case with a government-sponsored technology such as Clipper. The other important point is that the participatory process requires informed popular acceptance of the technology, rather than imposition through government pressure. The best indicator of the participatory process is a vigorous and open debate involving both technical and social considerations.

The case of encryption shows that participatory R&D does not eliminate the role of expertise. What it does reduce is the automatic association of expertise with degrees, jobs in prestigious institutions, high rank, awards, and service to vested interests. Expertise has to be tested in practical application. Just as an athlete cannot claim current superiority on the basis of degrees or past victories, so an expert in a process of participatory R&D cannot rely on credentials, but is always subject to the test of current practice.

These comments on participatory R&D are inevitably tentative. By their very nature, participatory systems are shaped by the process of participation itself, so what they become is not easy to predict.

10. TECHNOLOGY POLICY FOR NONVIOLENT STRUGGLE

The basic idea of technology for nonviolent struggle is straightforward. Actually bringing this alternative about — doing relevant research and developing, testing and implementing relevant technologies — is much more difficult. In this chapter I discuss priorities for moving towards technology that serves nonviolent rather than violent struggle.

The term usually used when discussing priorities of this sort is “policy,” in this case technology policy. The idea of policy, though, has come to refer primarily to decisions and implementation by governments. Governments are certainly important players in R&D, but not the only ones. After discussing priorities, I look at what can be done by three particular groups: governments; scientists and engineers; and community groups.[1]

Before beginning, it is worth emphasising that there are enormous institutional and conceptual obstacles to promoting nonviolent struggle.[2] Many government and corporate leaders would do everything they could to oppose development of grassroots capacity for nonviolent action, since this would pose a direct threat to their power and position. Furthermore, the idea of popular nonviolent struggle is extremely challenging to many people given standard expectations that the “authorities” or experts will take care of social problems, including defence. Therefore, to talk of technology policy for nonviolent struggle may seem utopian. But if alternatives are ever to be brought about, it is important to talk about them now. Without vision and dialogue, there is little hope of building a nonviolent future.

Priorities

The traditional idea of technological advance was the “linear model”: first there is scientific research; the results of the research are then applied to produce a technology; finally, the technology is taken up in the marketplace. Among those who study technological innovation, this simple model is pretty much discredited. Innovation seldom happens this way.

Another model is “market pull.” There is a demand for a certain product or service. This encourages technologists to search for a suitable solution; sometimes this involves doing directed research.
