describes it:

In the early 1900s, as tipping became increasingly popular, it provoked great moral and social controversy. In fact, there were nationwide efforts, some successful, by state legislatures to abolish tipping by turning it into a punishable misdemeanor. In countless newspaper editorials and magazine articles, in etiquette books, and even in court, tips were closely scrutinized with a mix of curiosity, amusement, and ambivalence — and often open hostility. When, in 1907, the government officially sanctioned tipping by allowing commissioned officers and enlisted men of the United States Navy to include tips as an item in their travel expense vouchers, the decision was denounced as an illegitimate endorsement of graft. Periodically, there were calls to organize anti-tipping leagues[38].

There is a conception of equality that would be corrupted by the efficiency that profiling embraces. That conception is a value to be weighed against efficiency. Although I believe this value is relatively weak in American life, who am I to say? The important point is not about what is strong or weak, but about the tension or conflict that lay dormant until revealed by the emerging technology of profiling.

The pattern should be familiar by now, because we have seen the change elsewhere. Once again, the code changes, throwing into relief a conflict of values. Whereas before there was relative equality because the information that enabled discrimination was too costly to acquire, now it pays to discriminate. The difference — what makes it pay — is the emergence of a code. The code changes, the behavior changes, and a value latent in the prior regime is displaced.

We could react by hobbling the code, thus preserving this world. We could create constitutional or statutory restrictions that prevent a move to the new world. Or we could find ways to reconcile this emerging world with the values we think are fundamental.

Solutions

I’ve identified two distinct threats to the values of privacy that the Internet will create. The first is the threat from “digital surveillance” — the growing capacity of the government (among others) to “spy” on your activities “in public.” From Internet access, to e-mail, to telephone calls, to walking on the street, digital technology is opening up the opportunity for increasingly perfect, burdenless searches.

The second threat comes from the increasing aggregation of data by private (among other) entities. These data are gathered not so much to “spy” as to facilitate commerce. Some of that commerce exploits the source of the data (Wesley Clark’s cell phone numbers). Some of that commerce tries to facilitate commerce with the source of that data (targeted ads).

Against these two different risks, we can imagine four types of responses, each mapping onto one of the modalities that I described in Chapter 7:

• Law: Legal regulation could be crafted to respond to these threats. We’ll consider some of these later, but the general form should be clear enough. The law could direct the President not to surveil American citizens without reasonable suspicion, for example. (Whether the President follows the law is a separate question.) Or the law could ban the sale of data gathered from customers without express permission of the customers. In either case, the law threatens sanctions to change behavior directly. The aim of the law could either be to enhance the power of individuals to control data about them, or to disable such power (for example, by making certain privacy-related transactions illegal).

• Norms: Norms could be used to respond to these threats. Norms among commercial entities, for example, could help build trust around certain privacy protective practices.

• Markets: In ways that will become clearer below, the market could be used to protect the privacy of individuals.

• Architecture/Code: Technology could be used to protect privacy. Such technologies are often referred to as “Privacy Enhancing Technologies.” These are technologies designed to give the user more technical control over data associated with him or her.

As I’ve argued again and again, there is no single solution to policy problems on the Internet. Every solution requires a mix of at least two modalities. And in the balance of this chapter, my aim is to describe a mix for each of these two threats to privacy.

No doubt this mix will be controversial to some. But my aim is not so much to push any particular mix of settings on these modality dials, as it is to demonstrate a certain approach. I don’t insist on the particular solutions I propose, but I do insist that solutions in the context of cyberspace are the product of such a mix.

Surveillance

The government surveils as much as it can in its fight against whatever its current fight is about. When that surveillance is human — wiretapping, or the like — then traditional legal limits ought to apply. Those limits impose costs (and thus, using the market, restrict surveillance to the most significant cases); they assure at least some review. And, perhaps most importantly, they build within law enforcement a norm respecting procedure.

When that surveillance is digital, however, then it is my view that a different set of restrictions should apply. The law should sanction “digital surveillance” if, but only if, a number of conditions apply:

• The purpose of the search enabled in the algorithm is described.

• The function of the algorithm is reviewed.

• The match between the purpose and the function is certified.

• No action — including a subsequent search — can be taken against any individual on the basis of the algorithm without judicial review.

• With very limited exceptions, no action against any individual can be pursued for matters outside the purpose described. Thus, if you’re looking for evidence of drug dealing, you can’t use any evidence discovered for prosecuting credit card fraud.
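
Read together, these conditions work like a gate on state action. What follows is a minimal sketch of that gate in Python; every name in it is hypothetical, invented for illustration, and corresponds to no real statute or system:

```python
# Hypothetical sketch of the five conditions above as a single gate.
# All names are invented for illustration only.
from dataclasses import dataclass

@dataclass
class SurveillanceAlgorithm:
    stated_purpose: str       # condition 1: the purpose is described
    function_reviewed: bool   # condition 2: the function has been reviewed
    match_certified: bool     # condition 3: the purpose/function match is certified

def may_act(algo: SurveillanceAlgorithm,
            action_purpose: str,
            judicial_approval: bool) -> bool:
    """Allow action on an algorithmic match only if every condition holds."""
    if not (algo.function_reviewed and algo.match_certified):
        return False          # conditions 2 and 3: review and certification first
    if not judicial_approval:
        return False          # condition 4: judicial review before any action
    if action_purpose != algo.stated_purpose:
        return False          # condition 5: no use outside the stated purpose
                              # (the "very limited exceptions" are omitted here)
    return True
```

The shape of the rule is the point: review and certification precede deployment, judicial review precedes any action, and the stated purpose bounds what the results may be used for.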

That describes the legal restrictions applied against the government in order to enhance privacy. If these are satisfied, then in my view such digital surveillance should not conflict with the Fourth Amendment.

In addition to these, there are privacy enhancing technologies (PETs) that should be broadly available to individuals as well. These technologies enable individuals to achieve anonymity in their transactions online. Many companies and activist groups help spread these technologies across the network.

Anonymity in this sense simply means non-traceability. Tools that enable this sort of non-traceability make it possible for an individual to send a message without the content of that message being traced to the sender. If such a tool is implemented properly, there is absolutely no technical way to trace that message. That kind of anonymity is essential to certain kinds of communication.
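
Such tools typically rely on layered encryption, the idea behind mix networks and onion routing systems such as Tor: the sender wraps the message in one layer of encryption per relay, and each relay can peel exactly one layer, so no single relay sees both who sent the message and what it says. Here is a minimal sketch of the layering alone (not a real mix network; routing, padding, batching, and key exchange are all omitted), using the Python cryptography package:

```python
# Layered ("onion") encryption in miniature, using the cryptography package
# (pip install cryptography). Purely illustrative.
from cryptography.fernet import Fernet

# Three relay keys; in a real network each relay holds only its own key.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys: list) -> bytes:
    """Encrypt for the last relay first, so each hop can peel one layer."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(onion: bytes, keys: list) -> bytes:
    """Each relay in turn removes exactly one layer of encryption."""
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

onion = wrap(b"an untraceable message", relay_keys)
assert peel(onion, relay_keys) == b"an untraceable message"
```

Because each relay holds only its own key, the first relay knows the sender but sees only ciphertext, and the last relay sees the plaintext but has no idea who sent it.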
