In hindsight, the most memorable images of science fiction often have more to do with our anxieties in the past (the writer’s present) than with those singular and ongoing scenarios that make up our life as a species: our real futures, our ongoing present.
Many of us, even today, or most particularly today, must feel as though we have silicon chips embedded in our brains. Some of us, certainly, are not entirely happy with that feeling. Some of us must wish that ubiquitous computing would simply go away and leave us alone, a prospect that seems increasingly unlikely.
But that does not, I think, mean that we will one day, as a species, submit to the indignity of the chip. If only because the chip will almost certainly be as quaint an object as the vacuum tube or the slide rule.
From the viewpoint of bioengineering, a silicon chip is a large and rather complex shard of glass. Inserting a silicon chip into the human brain involves a certain irreducible inelegance of scale. It’s scarcely more elegant, relatively, than inserting a steam engine into the same tissue. It may be technically possible, but why should we even want to attempt such a thing?
I suspect that medicine and the military will both find reasons for attempting such a thing, at least in the short run, and that medicine’s reasons may at least serve to counter someone’s acquired or inherited disability. If I were to lose my eyes, I would quite eagerly submit to some sort of surgery promising a video link to the optic nerves (and once there, why not insist on full-channel cable and a Web browser?). The military’s reasons for insertion would likely have something to do with what I suspect is the increasingly archaic job description of “fighter pilot,” or with some other aspect of telepresent combat, in which weapons in the field are remotely controlled by distant operators. At least there’s still a certain macho frisson to be had in the idea of deliberately embedding a tactical shard of glass in one’s head, and surely crazier things have been done in the name of king and country.
But if we do, I doubt we’ll be doing it for very long, as various models of biological and nanomolecular computing are looming rapidly into view. Rather than plug a piece of hardware into our gray matter, how much more elegant to extract some brain cells, plop them into a Petri dish, and graft on various sorts of gelatinous computing goo. Slug it all back into the skull and watch it run on blood sugar, the way a human brain’s supposed to. Get all the functions and features you want, without that clunky-junky twentieth-century hardware thing. You really don’t need complicated glass to crunch numbers, and computing goo probably won’t be all that difficult to build. (The trickier aspect here may be turning data into something that brain cells understand. If you knew how to make brain cells understand pull-down menus, you’d probably know everything you needed to know about brain cells, period. But we are coming to know, relatively, an awful lot about brain cells.)
Our hardware is likely to turn into something like us a lot faster than we are likely to turn into something like our hardware. Our hardware is evolving at the speed of light, while we are still the product, for the most part, of unskilled labor.
But there is another argument against the need to implant computing devices, be they glass or goo. It’s a very simple one, so simple that some have difficulty grasping it. It has to do with a certain archaic distinction we still tend to make, a distinction between computing and “the world.” Between, if you like, the virtual and the real.
I very much doubt that our grandchildren will understand the distinction between that which is a computer and that which isn’t.
Or, to put it another way, they will not know “computers” as any distinct category of object or function. This, I think, is the logical outcome of genuinely ubiquitous computing: the wired world. The wired world will consist, in effect, of a single unbroken interface. The idea of a device that “only” computes will perhaps be the ultimate archaism in a world in which the fridge or the toothbrush is potentially as smart as any other object, including you. A world in which intelligent objects communicate, routinely and constantly, with each other and with us. In this world, there will be no need for the physical augmentation of the human brain, as the most significant, and quite unthinkably powerful, augmentation will have already taken place postgeographically, via distributed processing.
You won’t need smart goo in your brain, because your fridge and your toothbrush will be very smart indeed, enormously smart, and they will be there for you, constantly and always.
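Read as architecture rather than prophecy, the paragraph above describes something like a publish/subscribe fabric: every object a peer on one shared bus, and nothing in the system a “computer” in any distinct sense. The sketch below is only a hypothetical illustration of that shape; the Bus and SmartObject classes, the topic names, and the fridge and toothbrush payloads are invented for this example and correspond to no real protocol or product.

```python
# A hypothetical sketch of the "single unbroken interface": every household
# object is a peer on one shared message bus, and nothing is a "computer"
# in any distinct sense. Bus, SmartObject, and the topic names are invented
# for illustration and stand in for no real protocol or product.

from collections import defaultdict
from typing import Callable, Dict, List

Handler = Callable[[str, dict], None]


class Bus:
    """An in-process publish/subscribe bus standing in for the wired world."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Handler]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Handler) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # Deliver to every object listening on the topic, sender included.
        for handler in self._subscribers[topic]:
            handler(topic, message)


class SmartObject:
    """Any object at all (a fridge, a toothbrush) joined to the same bus."""

    def __init__(self, name: str, bus: Bus) -> None:
        self.name = name
        self.bus = bus

    def listen(self, topic: str) -> None:
        self.bus.subscribe(topic, self._on_message)

    def announce(self, topic: str, **payload) -> None:
        self.bus.publish(topic, {"from": self.name, **payload})

    def _on_message(self, topic: str, message: dict) -> None:
        print(f"{self.name} heard on '{topic}': {message}")


if __name__ == "__main__":
    bus = Bus()
    fridge = SmartObject("fridge", bus)
    toothbrush = SmartObject("toothbrush", bus)

    # Both objects follow the same household topic; neither is "the computer."
    fridge.listen("household/morning")
    toothbrush.listen("household/morning")

    toothbrush.announce("household/morning", event="brushing_started", duration_s=120)
    fridge.announce("household/morning", event="milk_low", liters=0.2)
```

Run as a script, both objects hear both announcements; the point is simply that the intelligence lives in the traffic between ordinary things rather than in any single device that “only” computes.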
So it won’t, I don’t think, be a matter of computers crawling buglike down into the most intimate chasms of our being, but of humanity crawling buglike out into the dappled light and shadow of the presence of that which we will have created, which we are creating now, and which seems to me to already be in process of re-creating us.
William Gibson’s Filmless Festival
First up:
Suzanne was a member of Jean-Luc Godard’s Dziga Vertov Group, circa 1970–71, where she functioned as the embodiment of Liberated Woman. Trained by the great documentary filmmaker Joris Ivens, she cut Errol Morris’s
What becomes apparent, listening to Suzanne and then watching her film, is that
As the film ends I glance over at my daughter Claire, 16, and see that she’s excited, too, even though the movie’s dialog is in a variant of English that would send American video distributors running to the nearest subtitle house.
Suzanne tells us that her next feature, also shot digitally in Jamaica, is called