more public. And because software architects clearly understand the power of the default and use it to make their services more profitable, their claim that users can opt out of giving their personal information seems somewhat disingenuous. With fewer rules and a more transparent system, there are fewer defaults to set.

Facebook’s PR department didn’t return my e-mails requesting an interview (perhaps because MoveOn’s critical view of Facebook’s privacy practices is well known). But it would probably argue that it gives its users far more choice and control about how they use the service than Twitter does. And it’s true that Facebook’s options control panel lists scores of different options for Facebook users.

But to give people control, you have to make clearly evident what the options are, because options largely exist only to the degree that they’re perceived. This is the problem many of us used to face in programming our VCRs: The devices had all sorts of functions, but figuring out how to make them do anything was an afternoon-long exercise in frustration. When it comes to important tasks like protecting your privacy and adjusting your filters online, saying that you can figure it out if you read the manual for long enough isn’t a sufficient answer.

In short, at the time of this writing, Twitter makes it pretty straightforward to manage your filter and understand what’s showing up and why, whereas Facebook makes it nearly impossible. All other things being equal, if you’re concerned about having control over your filter bubble, better to use services like Twitter than services like Facebook.

We live in an increasingly algorithmic society, where our public functions, from police databases to energy grids to schools, run on code. We need to recognize that societal values about justice, freedom, and opportunity are embedded in how code is written and what it solves for. Once we understand that, we can begin to figure out which variables we care about and imagine how we might solve for something different.

For example, advocates looking to solve the problem of political gerrymandering—the backroom process of carving up electoral districts to favor one party or another—have long suggested that we replace the politicians involved with software. It sounds pretty good: Start with some basic principles, input population data, and out pops a new political map. But it doesn’t necessarily solve the basic problem, because what the algorithm solves for has political consequences: Whether the software aims to group by cities or ethnic groups or natural boundaries can determine which party keeps its seats in Congress and which doesn’t. And if the public doesn’t pay close attention to what the algorithm is doing, it could have the opposite of the intended effect—sanctioning a partisan deal with the imprimatur of “neutral” code.
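To see how much hangs on that choice, here is a toy sketch in Python. The precinct names, the numbers, and the objective labels are all invented for illustration, and no real redistricting system is remotely this simple; but even in this cartoon, the same “neutral” routine hands out seats differently depending on what it is asked to optimize.

    # Toy illustration: one districting routine, three objectives, different outcomes.
    # All precinct data below is invented for the sake of the example.
    from dataclasses import dataclass

    @dataclass
    class Precinct:
        name: str
        x: float          # rough position along an east-west axis
        dem_share: float  # fraction of voters leaning Democratic
        city: str

    PRECINCTS = [
        Precinct("P1", 0.10, 0.72, "Riverton"),
        Precinct("P2", 0.25, 0.68, "Riverton"),
        Precinct("P3", 0.40, 0.55, "Riverton"),
        Precinct("P4", 0.60, 0.41, "Lakeside"),
        Precinct("P5", 0.80, 0.38, "Lakeside"),
        Precinct("P6", 0.95, 0.35, "Lakeside"),
    ]

    def snake_assign(ordered, n_districts):
        """Deal precincts out in back-and-forth ('snake') order so that strong
        and weak precincts for each party end up mixed across districts."""
        districts = [[] for _ in range(n_districts)]
        index, step = 0, 1
        for precinct in ordered:
            districts[index].append(precinct)
            index += step
            if index in (-1, n_districts):  # bounce off either end
                step = -step
                index += step
        return districts

    def draw_districts(precincts, objective, n_districts=2):
        """Split precincts into equal-size districts. The sort and assignment
        rule -- what the algorithm 'solves for' -- is where the politics hides."""
        if objective == "compactness":        # keep geographic neighbors together
            ordered = sorted(precincts, key=lambda p: p.x)
        elif objective == "whole cities":     # keep each municipality in one district
            ordered = sorted(precincts, key=lambda p: p.city)
        elif objective == "competitiveness":  # spread partisan strongholds evenly
            ordered = sorted(precincts, key=lambda p: p.dem_share)
            return snake_assign(ordered, n_districts)
        else:
            raise ValueError(f"unknown objective: {objective}")
        size = len(ordered) // n_districts
        return [ordered[i * size:(i + 1) * size] for i in range(n_districts)]

    def seats_won(districts):
        """Count districts whose average lean is above 50 percent Democratic."""
        return sum(1 for d in districts
                   if sum(p.dem_share for p in d) / len(d) > 0.5)

    for objective in ("compactness", "whole cities", "competitiveness"):
        seats = seats_won(draw_districts(PRECINCTS, objective))
        print(f"{objective:>15}: Democrats win {seats} of 2 seats")

Run it and one objective leaves the minority party a safe seat while another leaves it with none, even though not a single voter has moved. That is the sense in which the choice of objective, not the arithmetic, does the political work.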

In other words, it’s becoming more important to develop a basic level of algorithmic literacy. Increasingly, citizens will have to pass judgment on programmed systems that affect our public and national life. And even if you’re not fluent enough to read through thousands of lines of code, the building-block concepts—how to wrangle variables, loops, and memory—can illuminate how these systems work and where they might make errors.

Especially at the beginning, learning the basics of programming is even more rewarding than learning a foreign language. With a few hours and a basic platform, you can have that “Hello, World!” experience and start to see your ideas come alive. And within a few weeks, you can be sharing these ideas with the whole Web. Mastery, as in any profession, takes much longer, but the payoff for a limited investment in coding is fairly large: It doesn’t take long to become literate enough to understand what most basic bits of code are doing.
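By way of example, that canonical first program is only a few lines. The snippet below happens to use Python, but any beginner-friendly language would do, and it already exercises the building blocks mentioned above: a variable, a loop, and output.

    # A first program: a variable, a loop, and some printed output.
    greeting = "Hello, World!"     # a variable holding a piece of text
    for count in range(1, 4):      # a loop that runs three times
        print(count, greeting)     # prints "1 Hello, World!" and so on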

Changing our own behavior is a part of the process of bursting the filter bubble. But it’s of limited use unless the companies that are propelling personalization forward change as well.

What Companies Can Do

It’s understandable that, given their meteoric rises, the Googles and Facebooks of the online world have been slow to realize their responsibilities. But it’s critical that they recognize their public responsibility soon. It’s no longer sufficient to say that the personalized Internet is just a function of relevance-seeking machines doing their job.

The new filterers can start by making their filtering systems more transparent to the public, so that it’s possible to have a discussion about how they’re exercising their responsibilities in the first place.

As Larry Lessig says, “A political response is possible only when regulation is transparent.” And there’s more than a little irony in the fact that companies whose public ideologies revolve around openness and transparency are so opaque themselves.

Facebook, Google, and their filtering brethren claim that to reveal anything about their algorithmic processes would be to give away business secrets. But that defense is less convincing than it sounds at first. Both companies’ primary advantage lies in the extraordinary number of people who trust them and use their services (remember lock-in?). According to Danny Sullivan’s Search Engine Land blog, Bing’s search results are “highly competitive” with Google’s, yet Bing has only a fraction of its more powerful rival’s users. It’s not a matter of math that keeps Google ahead, but the sheer number of people who use it every day. PageRank and the other major pieces of Google’s search engine are “actually one of the world’s worst kept secrets,” says Google Fellow Amit Singhal.

Google has also argued that it needs to keep its search algorithm under tight wraps because if it were known, it would be easier to game. But open systems are harder to game than closed ones, precisely because everyone shares an interest in closing loopholes. The open-source operating system Linux, for example, is actually more secure and harder to penetrate with a virus than closed ones like Microsoft’s Windows or Apple’s OS X.

Whether or not it makes the filterers’ products more secure or efficient, keeping the code under tight wraps does do one thing: It shields these companies from accountability for the decisions they’re making, because the decisions are difficult to see from the outside. But even if full transparency proves impossible, it’s possible for these companies to shed more light on how they approach sorting and filtering problems.

For one thing, Google and Facebook and other new media giants could draw inspiration from the history of the newspaper ombudsman, an idea that became a newsroom topic in the mid-1960s.

Philip Foisie, an executive at the Washington Post Company, wrote one of the most memorable memos arguing for the practice. “It is not enough to say,” he suggested, “that our paper, as it appears each morning, is its own credo, that ultimately we are our own ombudsman. It has not proven to be, possibly cannot be. Even if it were, it would not be viewed as such. It is too much to ask the reader to believe that we are capable of being honest and objective about ourselves.” The Post found his argument compelling and hired its first ombudsman in 1970.

“We know the media is a great dichotomy,” said the longtime Sacramento Bee ombudsman Arthur Nauman in a speech in 1994. On the one hand, he said, media has to operate as a successful business that provides a return on investment. “But on the other hand, it is a public trust, a kind of public utility. It is an institution invested with enormous power in the community, the power to affect thoughts and actions by the way it covers the news—the power to hurt or help the common good.” It is this spirit that the new media would do well to channel. Appointing an independent ombudsman and giving the world more insight into how the powerful filtering algorithms work would be an important first step.

Transparency doesn’t mean only that the guts of a system are available for public view. As the Twitter versus Facebook dichotomy demonstrates, it also means that individual users intuitively understand how the system works. And that’s a necessary precondition for people to control and use these tools—rather than having the tools control and use us.

To start with, we ought to be able to get a better sense of who these sites think we are. Google claims to make this possible with a “dashboard”—a single place to monitor and manage all of this data. In practice, its confusing and multitiered design makes it almost impossible for an average user to navigate and understand. In the United States, Facebook, Amazon, and other companies don’t allow users to download a complete compilation of their data, though privacy laws in Europe force them to. It’s an entirely reasonable expectation that the data users provide to companies ought to be available to us, and according to researchers at the University of California at Berkeley, it’s an expectation most Americans share. We ought to be able to say, “You’re wrong. Perhaps I used to be a surfer, or a fan of comics, or a Democrat, but I’m not anymore.”

Knowing what information the personalizers have on us isn’t enough. They also need to do a much better job explaining how they use the data—what bits of information are personalized, to what degree, and on what
