Yet in June 2004, the Supreme Court upheld an injunction barring enforcement of the statute[43].

Both statutes respond to a legitimate and important concern. Parents certainly have the right to protect their kids from this form of speech, and it is perfectly understandable that Congress would want to help parents secure this protection.

But both statutes are unconstitutional. That is not, as some suggest, because there is no way Congress could help parents. Rather, both are unconstitutional because the particular way Congress has tried to help parents burdens legitimate speech (for adults, that is) more than necessary.

In my view, however, there is a perfectly constitutional statute that Congress could pass that would have an important effect on protecting kids from porn.

To see what that statute looks like, we need to step back a bit from the CDA and COPA to identify what the legitimate objectives of this speech regulation would be.

Ginsberg[44] established that there is a class of speech that adults have a right to but that children do not. States can regulate that class to ensure that such speech is channeled to the proper user and blocked from the improper user.

Conceptually, for such a regulation to work, two questions must be answered:

Is the speaker uttering “regulable” speech — meaning speech “harmful to minors”?

Is the listener entitled to consume this speech — meaning is he a minor?

And with the answers to these questions, the logic of this regulation is:

    IF (speech == regulable)
    AND (listener == minor)
    THEN block access.
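
In code, the whole regulation reduces to a single conjunction. Here is a minimal Python sketch; the predicate names (speech_is_regulable, listener_is_minor) are hypothetical stand-ins for whatever process actually answers questions #1 and #2:

    def should_block(speech_is_regulable: bool, listener_is_minor: bool) -> bool:
        """Block access only when the speech is 'harmful to minors'
        AND the listener is a minor; in every other case, allow it."""
        return speech_is_regulable and listener_is_minor

The logic is trivial; everything interesting about the regulation lies in who must answer the two questions, and at what cost.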

Now between the listener and the speaker, clearly the speaker is in a better position to answer question #1. The listener can’t know whether the speech is harmful to minors until the listener encounters the speech. If the listener is a minor, then it is too late. And between the listener and the speaker, clearly the listener is in a better position to answer question #2. On the Internet especially, it is extremely burdensome for the speaker to certify the age of the listener. It is the listener who knows his age most cheaply.

The CDA and COPA placed the burden of answering question #1 on the speaker, and the burden of #2 on both the speaker and the listener. A speaker had to determine whether his speech was regulable, and speaker and listener had to cooperate to verify the listener’s age. If the speaker failed to do so, and the listener was a minor, then the speaker was guilty of a felony.

Real-space law also assigns the burden in exactly the same way. If you want to sell porn in New York, you must both determine whether the content you’re selling is “harmful to minors” and determine whether the person you’re selling to is a minor. But real space differs from cyberspace in an important way: the cost of answering question #2. In real space, the answer is almost automatic (again, it’s hard for a kid to hide that he’s a kid). And where the answer is not automatic, there’s a cheap system of identification (a driver’s license, for example). In cyberspace, by contrast, any mandatory system of identification burdens both the speaker and the listener. Even under COPA, a speaker has to bear the cost of a credit card system, and the listener has to trust a pornographer with his credit card just to get access to constitutionally protected speech.

There’s another feature of the CDA/COPA laws that seems necessary but isn’t: both place the burden of their regulation on everyone, including those who have a constitutional right to listen. That is, they require everyone to show an ID, even though only kids can constitutionally be blocked.

So compare, then, the burdens of the CDA/COPA to a different regulatory scheme: one that placed the burden of question #1 (whether the content is harmful to minors) on the speaker and the burden of question #2 (whether the listener is a minor) on the listener.

One version of this scheme is simple, obviously ineffective, and unfair to the speaker: a requirement that a website block access with a page that says “The content on this page is harmful to minors. Click here if you are a minor.” This scheme places the burden of age identification on the kid. But obviously, it would have zero effect in actually blocking a kid. And, less obviously, the scheme would be unfair to speakers. A speaker may well offer material that is “harmful to minors,” yet not everyone who offers such material should be labeled a pornographer. This transparent block stigmatizes some speakers, and if a less burdensome system were possible, that stigma alone should also render a regulation requiring it unconstitutional.

So what’s an alternative to this scheme that might actually work?

I’m going to demonstrate such a system with a particular example. Once you see the example, the general point will be easier to see as well.

Everyone knows the Apple Macintosh. It, like every modern operating system, now allows users to specify “accounts” on a particular machine. I’ve set one up for my son, Willem (he’s only three, but I want to be prepared). When I set up Willem’s account, I set it up with “parental controls.” That means I get to specify precisely what programs he gets to use, and what access he has to the Internet. The “parental controls” make it (effectively) impossible to change these specifications. You need the administrator’s password to do that, and if that’s kept secret, then the universe the kid gets to through the computer is the universe defined by the access the parent selects.

Imagine one of the programs I could select was a browser with a function we could call “kids-mode-browsing” (KMB). That browser would be programmed to watch on any web page for a particular mark. Let’s call that mark the “harmful to minors” mark, or <H2M> for short. That mark, or in the language of the Web, tag, would bracket any content the speaker believes is harmful to minors, and the KMB browser would then not display any content bracketed with this <H2M> tag. So, for example, a web page marked up “Blah blah blah <H2M>block this</H2M> blah blah blah” would appear on a KMB screen as: “Blah blah blah blah blah blah.”
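
To make the filtering concrete, here is a minimal Python sketch of what the KMB function would do. This is only an illustration under assumptions of my own (a real browser would filter while parsing the page, and the names H2M_SPAN and kids_mode_render are hypothetical):

    import re

    # Strip every <H2M>...</H2M> span before the page is displayed.
    # DOTALL lets a tagged span run across line breaks.
    H2M_SPAN = re.compile(r"<H2M>.*?</H2M>", re.IGNORECASE | re.DOTALL)

    def kids_mode_render(page: str) -> str:
        """Return the page text with all <H2M>-tagged content removed."""
        return H2M_SPAN.sub("", page)

    print(kids_mode_render("Blah blah blah <H2M>block this</H2M> blah blah blah"))
    # prints: Blah blah blah  blah blah blah

An ordinary browser, meanwhile, would simply ignore the unfamiliar tag and display the page in full, which is why the scheme burdens adult listeners not at all.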

So, if the world of the World Wide Web were marked with <H2M> tags, and if browser manufacturers built this <H2M>-filtering function into their browsers, then parents would be able to configure their machines so their kids didn’t get access to any content marked <H2M>. The policy objective of enabling parental control would be achieved with a minimal burden on constitutionally entitled speakers.

How can we get (much of) the world of the Web to mark its “harmful to minors” content with <H2M> tags?

This is the role for government. Unlike the CDA or COPA, the regulation required to make this system work (to the extent it works, and more on that below) is simply that speakers mark their content. Speakers would not be required to block access; speakers would not be required to verify age. All the speaker would be required to do is mark any content that is “harmful to minors” with the <H2M> tag.
