individual is doing or likes at any particular moment, the maturing data infrastructure produces a panopticon beyond anything Bentham ever imagined.

“Orwell” is the word you’re looking for. And while I believe that analogies to Orwell are just about always useless, let’s make one comparison here nonetheless. While the ends of the government in 1984 were certainly vastly more evil than anything our government would ever pursue, it is interesting to note just how inefficient, relative to the current range of technologies, Orwell’s technologies were. The central device was a “telescreen” that both broadcast content and monitored behavior on the other side. But the great virtue of the telescreen was that you knew what it, in principle, could see. Winston knew where to hide, because the perspective of the telescreen was transparent[12]. It was easy to know what it couldn’t see, and hence easy to know where to do the stuff you didn’t want it to see.

That’s not the world we live in today. You can’t know whether your search on the Internet is being monitored. You don’t know whether a camera is trying to identify who you are. Your telephone doesn’t make funny clicks as the NSA listens in. Your e-mail doesn’t report when some bot has searched it. The technologies of today have none of the integrity of the technologies of 1984. None are decent enough to let you know when your life is being recorded.

There’s a second difference as well. The great flaw in the design of 1984 was in imagining just how behavior was being monitored. There were no computers in the story. The monitoring was done by gaggles of guards watching banks of televisions. But that monitoring produced no simple way for the guards to connect their intelligence. There was no search across the brains of the guards. Sure, a guard might notice that you’re talking to someone you shouldn’t be talking to or that you’ve entered a part of a city you shouldn’t be in. But there was no single guard who had a complete picture of the life of Winston.

Again, that “imperfection” can now be eliminated. We can monitor everything and search the product of that monitoring. Even Orwell couldn’t imagine that.

I’ve surveyed a range of technologies to identify a common form. In each, the individual acts in a context that is technically public. I don’t mean it should be treated by the law as “public” in the sense that privacy should not be protected there. I’m not addressing that question yet. I mean only that the individual is putting his words or image in a context that he doesn’t control. Walking down 5th Avenue is the clearest example. Sending a letter is another. In both cases, the individual has put himself in a stream of activity that he doesn’t control.

The question for us, then, is what limits there should be — in the name of “privacy” — on the ability to surveil these activities. But even that question puts the matter too broadly. By “surveil”, I don’t mean surveillance generally. I mean the very specific kind of surveillance the examples above evince. I mean what we could call “digital surveillance.”

“Digital surveillance” is the process by which some form of human activity is analyzed by a computer according to some specified rule. The rule might say “flag all e-mail talking about Al Qaeda.” Or it might say “flag all e-mail praising Governor Dean.” Again, at this point I’m not focused upon the normative or legal question of whether such surveillance should be allowed. At this point, we’re just working through definitions. In each of the cases above, the critical feature is that a computer is sorting data for follow-up review by some human. The sophistication of the search is a technical question, but there’s no doubt that its accuracy is improving substantially.
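The definition above — a computer sorting activity against a specified rule, with only the flagged results passed to a human — can be sketched as a toy keyword filter. The messages and keywords here are purely illustrative, not drawn from any real system:

```python
# A minimal sketch of rule-based "digital surveillance": a machine
# sorts messages against a specified rule; only matches are flagged
# for follow-up review by a human. All data here is hypothetical.

def flag_messages(messages, keywords):
    """Return only the messages that match some keyword in the rule."""
    flagged = []
    for msg in messages:
        text = msg.lower()
        if any(kw.lower() in text for kw in keywords):
            flagged.append(msg)
    return flagged

inbox = [
    "Lunch at noon?",
    "Notes praising Governor Dean's speech",
]
# The rule "flag all e-mail praising Governor Dean" becomes, crudely:
print(flag_messages(inbox, ["Governor Dean"]))
```

The point of the sketch is only that the sorting itself involves no human reader: whatever the rule does not match is never seen by anyone.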

So should this form of monitoring be allowed?

When I ask this question framed precisely this way, I find two polar opposite reactions. On the one hand, friends of privacy say that there’s nothing new here. There’s no difference between the police reading your mail, and the police’s computer reading your e-mail. In both cases, a legitimate and reasonable expectation of privacy has been breached. In both cases, the law should protect against that breach.

On the other hand, friends of security insist there is a fundamental difference. As Judge Richard Posner wrote in the Washington Post, in an article defending the Bush Administration’s (extensive[13]) surveillance of domestic communications, “machine collection and processing of data cannot, as such, invade privacy.” Why? Because it is a machine that is processing the data. Machines don’t gossip. They don’t care about your affair with your co-worker. They don’t punish you for your political opinions. They’re just logic machines that act based upon conditions. Indeed, as Judge Posner argues, “this initial sifting, far from invading privacy (a computer is not a sentient being), keeps most private data from being read by any intelligence officer.” We’re better off having machines read our e-mail, Posner suggests, both because of the security gain, and because the alternative snoop — an intelligence officer — would be much nosier.

But it would go too far to suggest there isn’t some cost to this system. If we lived in a world where our every communication was monitored (if?), that would certainly challenge the sense that we were “left alone.” We would be left alone in the sense a toddler is left in a playroom — with parents listening carefully from the next room. There would certainly be something distinctively different about the world of perpetual monitoring, and that difference must be reckoned in any account of whether this sort of surveillance should be allowed.

We should also account for the “best intentions” phenomenon. Systems of surveillance are instituted for one reason; they get used for another. Jeff Rosen has cataloged the abuses of the surveillance culture that Britain has become[14]: video cameras used to leer at women or for sensational news stories. Or in the United States, the massive surveillance for the purpose of tracking “terrorists” was also used to track domestic environmental and antiwar groups[15].

But let’s frame the question in its most compelling form. Imagine a system of digital surveillance in which the algorithm was known and verifiable: We knew, that is, exactly what was being searched for; we trusted that’s all that was being searched for. That surveillance was broad and indiscriminate. But before anything could be done on the basis of the results from that surveillance, a court would have to act. So the machine would spit out bits of data implicating X in some targeted crime, and a court would decide whether that data sufficed either to justify an arrest or a more traditional search. And finally, to make the system as protective as we can, the only evidence that could be used from this surveillance would be evidence directed against the crimes being surveilled for. So for example, if you’re looking for terrorists, you don’t use the evidence to prosecute for tax evasion. I’m not saying what the targeted crimes are; all I’m saying is that we don’t use the traditional rule that allows all evidence gathered legally to be usable for any legal end.
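The safeguards just described — a known and verifiable rule, a court that must act before any consequence follows, and a restriction tying evidence to the targeted crime — fit together as a small pipeline. The following is a hypothetical sketch of that logic, not a description of any real system; the threshold, crime names, and function names are all invented for illustration:

```python
# Hypothetical sketch of the system described above: indiscriminate
# machine search, a judicial gate before any action, and a use
# restriction on the resulting evidence.

TARGETED_CRIME = "terrorism"  # illustrative; the text leaves the targets unspecified

def machine_search(records, rule):
    # Machine step: flag only the records matching the known, verifiable rule.
    return [r for r in records if rule(r)]

def court_authorizes(flagged, threshold):
    # Stand-in for judicial review: an arrest or traditional search
    # is authorized only if the machine's output meets some threshold.
    return len(flagged) >= threshold

def admissible(evidence_crime):
    # Use restriction: evidence from this surveillance counts only
    # against the targeted crime, not (say) tax evasion found along the way.
    return evidence_crime == TARGETED_CRIME
```

On this sketch, `admissible("tax evasion")` is false even when the underlying search was lawful — which is exactly the departure from the traditional rule that legally gathered evidence may serve any legal end.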

Would such a system violate the protections of the Fourth Amendment? Should it?

The answer to this question depends upon your conception of the value protected by the Fourth Amendment. As I described in Chapter 6, that amendment was targeted against indiscriminate searches and “general warrants” — that is, searches that were not particularized to any individual, and the immunity granted to those conducting such searches. But those searches, like any search at that time, imposed burdens on the person being searched. If you viewed the value the Fourth Amendment protected as the protection from the unjustified burden of this indiscriminate search, then this digital surveillance would seem to raise no significant problems. As framed above, it produces no burden at all unless sufficient evidence is discovered to induce a court to authorize a search.
