little bit of messiness or friction in the context of speech is a value, not a cost.

But are these values different just because I say they are? No. They are only different if we say they are different. In real space we treat them as different. My core argument is that we choose how we want to treat them in cyberspace.

Regulating Spam

Spam is perhaps the most theorized problem on the Net. There are scores of books addressing how best to deal with the problem. Many of these are filled with ingenious technical ideas for ferreting out spam, from advanced Bayesian filter techniques to massive redesigns of the e-mail system.

But what is most astonishing to me as a lawyer (and depressing to me as the author of Code) is that practically all of these works ignore one important tool with which the problem of spam could be addressed: the law. It’s not that they weigh the value of the law relative to, for example, Bayesian filters or the latest in heuristic techniques, and conclude it is less valuable than these other techniques. It’s that they presume the value of the law is zero — as if spam were a kind of bird flu which lived its own life totally independently of what humans might want or think.

This is an extraordinary omission in what is, in effect, a regulatory strategy. As I have argued throughout this book, the key to good policy in cyberspace is a proper mix of modalities, not a single silver bullet. The idea that code alone could fix the problem of spam is silly — code can always be coded around, and unless those who would circumvent it are given some other reason not to, they will code around it. The law is a tool to change incentives, and it should be a tool used here as well.

Most think the law can’t play a role here because they think spammers will be better at evading the law than they are at evading spam filters. But this thinking ignores one important fact about spam. “Spam” is not a virus. Or at least, when talking about “spam”, I’m not talking about viruses. My target in this part is communication that aims at inducing a commercial transaction. Many of these transactions are ridiculous — drugs to stop aging, or instant weight loss pills. Some of these transactions are quite legitimate — special sales of overstocked products, or invitations to apply for credit cards. But all of these transactions aim in the end to get something from you: Money. And crucially, if they aim to get money from you, then there must be someone to whom you are giving your money. That someone should be the target of regulation.

So what should that regulation be?

The aim here, as with porn, should be to regulate to the end of assuring what we could call “consensual communication.” That is, the only purpose of the regulation should be to block nonconsensual communication, and enable consensual communication. I don’t believe that purpose is valid in every speech context. But in this context — private e-mail, or blogs, with limited bandwidth resources, with the costs of the speech borne by the listener — it is completely appropriate to regulate to enable individuals to block commercial communications that they don’t want to receive.

So how could that be done?

Today, the only modality that has any meaningful effect upon the supply of spam is code. Technologists have demonstrated extraordinary talent in devising techniques to block spam. These techniques are of two sorts — one which is triggered by the content of the message, and one which is triggered by the behavior of the sender.

The technique that is focused upon content is an array of filtering technologies designed to figure out the meaning of a message. As Jonathan Zdziarski describes, these techniques have improved dramatically. While early heuristic filtering techniques had error rates around 1 in 10, current Bayesian techniques promise 99.5%–99.95% accuracy[58].
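To give a concrete sense of what such a content-based filter does, here is a minimal sketch of a naive Bayes classifier of the general kind Zdziarski describes. It is an illustration only: the training messages, the tokenization, and the scoring are my own assumptions, not any particular product’s design.

```python
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens; real filters use far richer features."""
    return re.findall(r"[a-z0-9$']+", text.lower())

class NaiveBayesFilter:
    def __init__(self):
        self.spam_counts = Counter()   # word frequencies seen in spam
        self.ham_counts = Counter()    # word frequencies seen in legitimate mail
        self.spam_msgs = 0
        self.ham_msgs = 0

    def train(self, text, is_spam):
        (self.spam_counts if is_spam else self.ham_counts).update(tokens(text))
        if is_spam:
            self.spam_msgs += 1
        else:
            self.ham_msgs += 1

    def spam_probability(self, text):
        """Naive Bayes in log space, with add-one smoothing."""
        spam_total = sum(self.spam_counts.values())
        ham_total = sum(self.ham_counts.values())
        vocab = len(set(self.spam_counts) | set(self.ham_counts))
        log_spam = math.log(self.spam_msgs / (self.spam_msgs + self.ham_msgs))
        log_ham = math.log(self.ham_msgs / (self.spam_msgs + self.ham_msgs))
        for tok in tokens(text):
            log_spam += math.log((self.spam_counts[tok] + 1) / (spam_total + vocab))
            log_ham += math.log((self.ham_counts[tok] + 1) / (ham_total + vocab))
        # Convert back from log space to a probability between 0 and 1.
        return 1 / (1 + math.exp(log_ham - log_spam))

f = NaiveBayesFilter()
f.train("cheap pills lose weight fast buy now", is_spam=True)
f.train("limited offer apply for a credit card today", is_spam=True)
f.train("meeting moved to tuesday, agenda attached", is_spam=False)
f.train("draft of the chapter on spam regulation", is_spam=False)
print(f.spam_probability("buy cheap pills today"))       # high: looks like spam
print(f.spam_probability("agenda for tuesday meeting"))  # low: looks legitimate
```

A real filter trains on many thousands of messages and tunes the threshold at which a score becomes a block; the point of the sketch is only that the decision turns on the content of the message.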

But the single most important problem with these techniques is the arms race they produce[59]. Spammers have access to the same filters that network administrators use to block spam — at least if the filters are heuristic[60]. They can therefore play with the message content until it defeats the filter. That in turn requires filter writers to change the filters. Some do it well; some don’t. The consequence is that the filters are often over- and under-inclusive — blocking much more than they should or not blocking enough.
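A toy example makes that over- and under-inclusiveness concrete. Assume a crude keyword rule of the sort spammers quickly learn to game; the word list and the messages are invented purely for illustration.

```python
# A crude keyword rule of the sort spammers quickly learn to game.
BLOCKED_WORDS = {"viagra", "mortgage", "winner"}

def crude_filter(message):
    """Return True if the message would be blocked."""
    return bool(set(message.lower().split()) & BLOCKED_WORDS)

# Under-inclusive: a trivially obfuscated pitch sails through.
print(crude_filter("Cheap v1agra no prescription needed"))          # False

# Over-inclusive: a legitimate question gets blocked.
print(crude_filter("Can we discuss my mortgage refinance rate?"))   # True
```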

The second code-based technique for blocking spam focuses upon the e-mail practices of the sender — meaning not the person sending the e-mail, but the “server” that is forwarding the message to the recipient. A large number of network vigilantes — by which I mean people acting for the good in the world without legal regulation — have established lists of good and bad e-mail servers. These blacklists are compiled by examining the apparent rules the e-mail server uses in deciding whether to send e-mail. Those servers that don’t obey the vigilante’s rules end up on a blacklist, and people subscribing to these blacklists then block any e-mail from those servers.
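The mechanics of such a blacklist are themselves simple. A subscribing mail server takes the IP address of the machine offering it mail, reverses the octets, and looks the result up in the blacklist’s DNS zone; if a record comes back, the sender is listed. The sketch below assumes a hypothetical zone name, dnsbl.example.org; real lists use the same query convention, and differ instead in the policies that decide who gets listed.

```python
import socket

# Hypothetical DNS blacklist zone; real DNSBLs follow the same query convention.
DNSBL_ZONE = "dnsbl.example.org"

def is_blacklisted(ip_address, zone=DNSBL_ZONE):
    """Check whether a sending server's IPv4 address appears on a DNS blacklist.

    Convention: reverse the octets of the address and prepend them to the
    blacklist's zone. If the resulting name resolves, the address is listed;
    if the lookup fails (NXDOMAIN), it is not.
    """
    reversed_octets = ".".join(reversed(ip_address.split(".")))
    query = f"{reversed_octets}.{zone}"
    try:
        socket.gethostbyname(query)
        return True      # The name resolved: the server is on the blacklist.
    except socket.gaierror:
        return False     # No record: the server is not listed.

# A subscribing mail server would refuse or flag mail from listed hosts.
if is_blacklisted("192.0.2.1"):
    print("reject or quarantine the message")
else:
    print("accept the message")
```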

This system would be fantastic if there were agreement about how best to avoid “misuse” of servers. But there isn’t any such agreement. There are instead good faith differences among good people about how best to control spam[61]. These differences, however, get quashed by the power of the boycott. Indeed, in a network, a boycott is especially powerful. If 5 out of 100 recipients of your e-mail can’t receive it because of the rules your network administrator adopts for your e-mail server, you can be sure the server’s rules — however sensible — will be changed. And often, there’s no way to appeal the decision to include a server on a blacklist. Like the private filtering technologies for porn, there’s no likely legal remedy for wrongful inclusion on a blacklist. As a result, many types of e-mail service can’t function effectively because they don’t obey the rules of the blacklists.

Now if either or both of these techniques actually worked to stop spam, I would accept them. I am particularly troubled by the process-less blocking of blacklists, and I have personally suffered significant embarrassment and cost when e-mail that wasn’t spam was treated as spam. Yet even those costs might be acceptable if the system in general worked.

But it doesn’t. The quantity of spam continues to increase. The Radicati Group “predicts that by 2007, 70% of all e-mail will be spam”[62]. And while there is evidence that the rate of growth in spam is slowing, there’s no good evidence that the pollution of spam is abating[63]. The only federal legislative response, the CAN-SPAM Act, while preempting many innovative state solutions, is not having any significant effect[64].

Not only are these techniques not blocking spam, they are also blocking legitimate bulk e-mail that isn’t — at least from my perspective[65] — spam. The most important example is political e-mail. One great virtue of e-mail was that it would lower the costs of social and political communication. That in turn would widen the opportunity for political speech. But spam-blocking technologies have now emerged as a tax on these important forms of social speech. They have effectively removed
