This book is based on the idea that technologies can and should be developed and chosen on the basis of their usefulness for nonviolent struggle. This in turn rests on a number of assumptions about the nature of technology.
In chapter 2 on militarised technology, I argued that the military influences the development of technology in a number of ways, including through funding, applications, employment and suppression of challenges, as well as through deep structures such as the state, capitalism, bureaucracy and patriarchy. In later chapters, I outlined a variety of actual and potential technological developments that would be of special value for nonviolent struggle. In making these arguments I have assumed that:
technology is shaped by a range of social factors;
any given technological system is more useful for some purposes than others (e.g. military versus nonviolent struggle);
it is possible to influence the process of technological development to serve desirable social goals.
It would be possible to attempt to justify these three assumptions through a set of abstract arguments. My approach, however, has been to build an argument — with plenty of examples — based on these assumptions and to implicitly justify the assumptions by demonstrating the insights available. In this appendix I continue this strategy by outlining some common approaches to studying technology and seeing whether they provide useful ways to tackle the topic of technology for nonviolent struggle. This will illuminate some of the shortcomings of certain approaches and help clarify my approach.[1]
Essentialist Approaches
An “essentialist” approach to technology assumes that it has essential or inherent features. Common essentialist views are that technology is good, bad, neutral or inevitable.
Some people think that technology is inherently good. Military technology provides the clearest evidence to the contrary. Bullets and bombs kill. People who are killed by bullets and bombs would not see these artefacts as good — not good for them, anyway. It is difficult to argue that weapons of mass destruction are inherently good. In fact, it was the development of nuclear weapons that made many technologists realise that not everything they produced was of benefit to humanity.
When people think that technology is inherently good, they usually make an implicit assumption: the only choice is between present technology — all of it, including stereos, baby bottles and biological weapons — and no technology at all. If it is assumed instead that it is possible to make choices about technology, namely to have some artefacts but not others, then the idea that “technology is good” collapses. It should be obvious that the technology-is-good model is of no value in analysing problems with military technology or developing technology for nonviolent struggle.
A contrary view, held by a few, is that technology is inherently bad. This idea is similarly flawed. After all, some technologies help at least some people: wearing glasses helps some people to see better, even if the production of the glasses causes pollution and unpleasant work conditions. It is only possible to argue that technology is inherently bad if there is no choice between technologies.
Many people are attracted to the idea that technology is inherently neutral, believing that it is either good or bad depending on the way it is used. This is the so-called use-abuse model: technology can be either used (for good purposes) or abused (for bad purposes). It is certainly true that many artefacts can be used for both good and bad purposes. For example, a computer word processor can be used to produce lists of dissidents who are to be arrested or killed, or it can be used to produce articles proclaiming the value of dissent. Computers often make tasks easier, but they can also lead to people losing their jobs. But does this mean that all artefacts are neutral?
An alternative perspective is that particular artefacts are easier to use for some purposes than others. For example, if you want to clean your hands, soap is more helpful than a newspaper or a candle. After all, artefacts are designed for particular purposes. Of course, they might be used for other purposes. A toothbrush is designed for cleaning teeth, but it can also be used to clean shoes or even for painting. But a toothbrush is not very helpful for sweeping the street or eating peas. This point should be obvious: any particular artefact is not equally useful for all purposes.
In this sense, artefacts are not neutral. A pair of dice might be said to be neutral if every total from 2 to 12 has a fair chance of coming up. But the dice would be called biased if they gave 12 half the time. In this sort of sense, artefacts are biased. They can potentially be used for many different purposes, but they are much easier and more likely to be used for certain purposes.
This applies clearly to military technologies. A nuclear explosion can be used to heat a house or fry an egg, but this is neither the intended nor a convenient use of the technology. Thumbscrews are designed and used for torture. Their actual use as paperweights or parts of a sculpture, or their potential use for medical operations, hardly makes them neutral in any practical sense.
The idea that technologies are neutral is usually maintained by taking a broad perspective. For example, it can be claimed that computers are neutral because they can be used for beneficial or harmful purposes. But this only means that sometimes they can be used for beneficial purposes and sometimes for harmful ones. It doesn’t mean that these applications are equally easy or likely. Nor does it mean that the benefits and harm are spread around equally.
To pierce the illusion of neutrality it is only necessary to take a closer look, for example at the computer built into the nose cone of a cruise missile, enabling the missile to use altitude readings to assess where it is and to adjust its course as necessary. The computer is designed to help the missile reach its target and destroy it. This computer is not neutral. The idea of neutrality may be attractive to people because it removes the necessity to think carefully about the values built into the design, choice and use of technology.
The idea that technology is neutral provides no leverage for analysing technology for nonviolent struggle. After all, if technology is neutral, that presumably means that any technology can be used for nonviolent struggle and there is no obvious means for choosing between technologies.
Sometimes it seems as if technologies have a will of their own. The telephone and the automobile have spread throughout society and no one seems able to stop their use. What is called “technological determinism” can be interpreted in various ways. It can mean that once a new technology is developed — such as guns or nuclear weapons — it has an inherent momentum leading to its widespread use. It can mean that there is a general pattern of technological development that is inevitable, such as the use of steel, electricity or computers.
Simple interpretations of technological determinism don’t stand up to scrutiny.[2] There are plenty of technologies that have been developed but have never become dominant, such as housing with passive solar design, supersonic transport aircraft, microfiche publishing and cryonic suspension. How can it be said that technology determines its own development when so many technologies are failures? One answer is that some technologies are “better” and hence more successful. But this argument is circular, at least when the way to determine whether one technology is better than another is to see whether it is more successful. Technological determinism provides a convenient excuse for ignoring the human choices, especially the exercise of power, in the development of technology.
Technological determinism provides no help in analysing technology for nonviolent struggle. It assumes that military technologies are dominant due to their own inherent properties; nonviolent alternatives have not been successful and hence may be ignored. My entire analysis is based on a rejection of technological determinism and an endorsement of the view that social choice is the basis for technological development and that this choice should become more participatory.
However, in taking technology for nonviolent struggle as my topic, I find it hard to avoid sounding like a technological determinist at times. Because the focus is on technology, it is possible to create the impression that by adopting a suitable technology, the cause of nonviolent struggle is automatically advanced. My view is that the development and use of technology is always a social process and, as such, is one of a number of social locations for promoting or waging nonviolent struggle.