required to do is to tag content deemed harmful to minors with the proper tag.

This tag, moreover, would not be a public marking that a website was a porn site. This proposal is not like the (idiotic, imho) proposals that we create a .sex or .xxx domain for the Internet. People shouldn’t have to locate to a red-light district just to have adult material on their site. The <H2M> tag instead would be hidden from the ordinary user — unless that user looks for it, or wants to block that content him or herself.

Once the government enacts this law, then browser manufacturers would have an incentive to build this (very simple) filtering technology into their browsers. Indeed, given the open-source Mozilla browser technology — to which anyone could add anything they wanted — the costs of building this modified browser are extremely low. And once the government enacts this law, and browser manufacturers build a browser that recognizes this tag, then parents would have a strong reason to adopt platforms that enable them to control where their kids go on the Internet.

Thus, in this solution, the LAW creates an incentive (through penalties for noncompliance) for sites with “harmful to minors” material to change their ARCHITECTURE (by adding <H2M> tags) which creates a MARKET for browser manufacturers (new markets) to add filtering to their code, so that parents can protect their kids. The only burden created by this solution is on the speaker; this solution does not burden the rightful consumer of porn at all. To that consumer, there is no change in the way the Web is experienced, because without a browser that looks for the <H2M> tag, the tag is invisible to the consumer.
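
To make the mechanism concrete, here is a minimal sketch of what the browser-side half might look like, written in TypeScript running in the page. The <H2M> element (and the page-level meta variant shown here) is the proposal's hypothetical markup, not an existing standard, and the parental-controls flag stands in for whatever setting a real filtering browser would expose; this is an illustration of the idea, not a definitive implementation.

```typescript
// Sketch of a filtering browser's handling of the hypothetical <H2M> tag.
// An ordinary browser ignores unknown tags, so untagged users see no change;
// only a browser (or extension) that looks for the tag acts on it.

const PARENTAL_CONTROLS_ENABLED = true; // assumed parent-chosen setting

function filterH2MContent(doc: Document): void {
  if (!PARENTAL_CONTROLS_ENABLED) return; // ordinary users: tag stays invisible

  // A site might mark a whole page (hypothetical convention)...
  if (doc.querySelector('meta[name="h2m"]') !== null) {
    doc.body.innerHTML = "<p>Blocked by parental controls.</p>";
    return;
  }

  // ...or wrap only the "harmful to minors" sections in <h2m> elements.
  doc.querySelectorAll("h2m").forEach((el) => {
    (el as HTMLElement).style.display = "none"; // hide just the tagged content
  });
}

filterH2MContent(document);
```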

But isn’t that burden on the speaker unconstitutional? It’s hard to see why it would be, if it is constitutional in real space to tell a speaker he must filter kids from his content “harmful to minors.” No doubt there’s a burden. But the question isn’t whether there’s a burden. The constitutional question is whether there is a less burdensome way to achieve this important state interest.

But what about foreign sites? Americans can’t regulate what happens in Russia. Actually, that’s less true than you think. As we’ll see in the next chapter, there’s much that the U.S. government can do and does to effectively control what other countries do.

Still, you might worry that sites in other countries won’t obey American law because it’s not likely we’ll send in the Marines to take out a noncomplying website. That’s certainly true. But to the extent that a parent is concerned about this, as I already described, there is a market already to enable geographic filtering of content. The same browser that filters on <H2M> could in principle subscribe to an IP mapping service to enable access to American sites only.
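
As a rough illustration of that last point, the same filtering browser could consult a geo-IP mapping service before allowing navigation. The sketch below assumes a subscription lookup service; the URL and response format are invented placeholders, not a real API.

```typescript
// Sketch of geographic filtering layered on top of the <H2M> filter:
// allow navigation only to sites whose servers map to U.S. addresses,
// so foreign sites that ignore the law are simply unreachable.

async function lookupCountry(hostname: string): Promise<string> {
  // Placeholder for a hypothetical geo-IP subscription service.
  const response = await fetch(`https://geoip.example.com/lookup?host=${hostname}`);
  const data = await response.json();
  return data.country; // ISO code, e.g. "US" or "RU"
}

async function isNavigationAllowed(url: string): Promise<boolean> {
  const hostname = new URL(url).hostname;
  const country = await lookupCountry(hostname);
  return country === "US"; // restrict access to American sites only
}
```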

But won’t kids get around this restriction? Sure, of course some will. But the measure of success for legislation (as opposed to missile tracking software) is not 100 percent. The question the legislature asks is whether the law will make things better[45]. To substantially block access to <H2M> content would be a significant improvement, and that would be enough to make the law make sense.

But why not simply rely upon filters that parents and libraries install on their computers? Voluntary filters don’t require any new laws, and they therefore don’t require any state-sponsored censorship to achieve their ends.

It is this view that I want to work hardest to dislodge, because built within it are all the mistakes that a pre-cyberlaw understanding brings to the question of regulation in cyberspace.

First, consider the word “censorship.” What this regulation would do is give parents the opportunity to exercise an important choice. Enabling parents to do this has been deemed a compelling state interest. The kids who can’t get access to this content because their parents exercised this choice might call it “censorship”, but that isn’t a very useful application of the term. If there is a legitimate reason to block this form of access, that’s speech regulation. There’s no reason to call it names.

Second, consider the preference for “voluntary filters.” If voluntary filters were to achieve the very same end (blocking H2M speech and only H2M speech), I’d be all for them. But they don’t. As the ACLU quite powerfully described (shortly after winning the case that struck down the CDA partly on the grounds that private filters were a less restrictive means than government regulation):

The ashes of the CDA were barely smoldering when the White House called a summit meeting to encourage Internet users to self-rate their speech and to urge industry leaders to develop and deploy the tools for blocking “inappropriate speech.” The meeting was “voluntary”, of course: the White House claimed it wasn’t holding anyone’s feet to the fire. But the ACLU and others . . . were genuinely alarmed by the tenor of the White House summit and the unabashed enthusiasm for technological fixes that will make it easier to block or render invisible controversial speech. . . . It was not any one proposal or announcement that caused our alarm; rather, it was the failure to examine the longer-term implications for the Internet of rating and blocking schemes[46].

The ACLU’s concern is the obvious one: The filters that the market has created not only filter much more broadly than the legitimate interest the state has here — blocking <H2M> speech — they also do so in a totally nontransparent way. There have been many horror stories of sites being included in filters for all the wrong reasons (including for simply criticizing the filter)[47]. And when you are wrongfully blocked by a filter, there’s not much you can do. The filter is just a particularly effective recommendation list. You can’t sue Zagat’s just because they steer customers to your competitors.

My point is not that we should ban filters, or that parents shouldn’t be allowed to block more than H2M speech. My point is that if we rely upon private action alone, more speech will be blocked than if the government acted wisely and efficiently.

And that frames my final criticism: As I’ve argued from the start, our focus should be on the liberty to speak, not just on the government’s role in restricting speech. Thus, between two “solutions” to a particular speech problem, one that involves the government and suppresses speech narrowly, and one that doesn’t involve the government but suppresses speech broadly, constitutional values should tilt us to favor the former. First Amendment values (even if not the First Amendment directly) should lead to favoring a speech regulation system that is thin and accountable, and in which the government’s action or inaction leads only to the suppression of speech the government has a legitimate interest in suppressing. Or, put differently, the fact that the government is involved should not necessarily disqualify a solution as a proper, rights-protective solution.

The private filters the market has produced so far are both expensive and over-inclusive. They block content that is beyond the state’s interest in regulating speech. They are effectively subsidized because there is no less restrictive alternative.

Publicly required filters (which are what the <H2M> tag effectively enables) are narrowly targeted on the legitimate state interest. And if there is a dispute about that tag — if, for example, a prosecutor says a website with information about breast cancer must tag the information with an <H2M> tag — then the website at least has the opportunity to fight that. If that filtering were in private software, there would be no opportunity to challenge that determination.
