basis. A visitor to a personalized news site could be given the option of seeing how many other visitors were seeing which articles—perhaps even a color-coded visual map of the areas of commonality and divergence. Of course, this requires admitting to the user that personalization is happening in the first place, and there are strong reasons in some cases for businesses not to do so. But they’re mostly commercial reasons, not ethical ones.

The Interactive Advertising Bureau is already pushing in this direction. An industry trade group for the online advertising community, the IAB has concluded that unless personalized ads disclose to users how they’re personalized, consumers will get angry and demand federal regulation. So it’s encouraging its members to include a set of icons on every ad to indicate what personal data the ad draws on and how to change or opt out of this feature set. As content providers incorporate the personalization techniques pioneered by direct marketers and advertisers, they should consider incorporating these safeguards as well.

Even then, sunlight doesn’t solve the problem unless it’s coupled with a focus in these companies on optimizing for different variables: more serendipity, a more humanistic and nuanced sense of identity, and an active promotion of public issues and cultivation of citizenship.

As long as computers lack consciousness, empathy, and intelligence, much will be lost in the gap between our actual selves and the signals that can be rendered into personalized environments. And as I discussed in chapter 5, personalization algorithms can cause identity loops, in which what the code knows about you constructs your media environment, and your media environment helps to shape your future preferences. This is an avoidable problem, but it requires crafting an algorithm that prioritizes “falsifiability,” that is, an algorithm that aims to disprove its idea of who you are. (If Amazon harbors a hunch that you’re a crime novel reader, for example, it could actively present you with choices from other genres to fill out its sense of who you are.)
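To make the falsifiability idea concrete, here is a minimal, hypothetical sketch; the function names, data, and the explore-rate knob are my assumptions for illustration, not any company’s actual recommendation code. The idea is simply to reserve a fraction of slots for genres outside the system’s current belief about the reader, so that belief can be disproved:

```python
import random

def recommend(user_profile, catalog, explore_rate=0.2, n=10):
    """Hypothetical sketch: mostly recommend from the genre the system
    believes the user prefers, but reserve a fraction of slots for other
    genres, so its hypothesis about the reader can be tested and disproved."""
    believed_genre = max(user_profile, key=user_profile.get)   # e.g. "crime"
    other_genres = [g for g in catalog if g != believed_genre]

    picks = []
    for _ in range(n):
        if other_genres and random.random() < explore_rate:
            genre = random.choice(other_genres)   # probe outside the hypothesis
        else:
            genre = believed_genre                # exploit the current belief
        picks.append(random.choice(catalog[genre]))
    return picks

# Toy usage: profile scores inferred from past clicks, catalog keyed by genre.
profile = {"crime": 0.8, "poetry": 0.1, "history": 0.1}
catalog = {
    "crime": ["Gone Girl"],
    "poetry": ["Leaves of Grass"],
    "history": ["1491"],
}
print(recommend(profile, catalog))
```

The `explore_rate` parameter is the interesting knob: the higher it is, the more of the feed the system spends probing whether its picture of you is wrong rather than simply reinforcing it.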

Companies that hold great curatorial power also need to do more to cultivate public space and citizenship. To be fair, they’re already doing some of this: Visitors to Facebook on November 2, 2010, were greeted by a banner asking them to indicate if they’d voted. Those who had voted shared this news with their friends, and because voting is partly driven by social pressure, it’s quite possible that Facebook increased the number of voters. Likewise, Google has been doing strong work to make information about polling locations more open and easily available, and it featured that tool on its home page on the same day. Whether or not this is profit-seeking behavior (a “find your polling place” feature would presumably be a terrific place for political advertising), both projects drew the attention of users toward political engagement and citizenship.

A number of the engineers and technology journalists I talked to raised their eyebrows when I asked them if personalizing algorithms could do a better job on this front. After all, one said, who’s to say what’s important? For Google engineers to place a value on some kinds of information over others, another suggested, would be unethical—though of course this is precisely what the engineers themselves do all the time.

To be clear, I don’t yearn to go back to the good old days when a small group of all-powerful editors unilaterally decided what was important. Too many actually important stories (the genocide in Rwanda, for example) fell through the cracks, while too many actually unimportant ones got front-page coverage. But I also don’t think we should jettison that approach altogether. Yahoo News suggests there is some possibility for middle ground: The team combines algorithmic personalization with old-school editorial leadership. Some stories are visible to everyone because they’re surpassingly important. Others show up for some users and not others. And while the editorial team at Yahoo spends a lot of time interpreting click data and watching which articles do well and which don’t, they’re not subservient to it. “Our editors think of the audience as people with interests, as opposed to a flood of directional data,” a Yahoo News employee told me. “As much as we love the data, it’s being filtered by human beings who are thinking about what the heck it means. Why didn’t the article on this topic we think is important for our readers to know about do better? How do we help it find a larger audience?”

And then there are fully algorithmic solutions. For example, why not rely on everyone’s idea of what’s important? Imagine for a moment that next to each Like button on Facebook was an Important button. You could tag items with one or the other or both. And Facebook could draw on a mix of both signals—what people like, and what they think really matters—to populate and personalize your news feed. You’d have to bet that news about Pakistan would be seen more often—even accounting for everyone’s quite subjective definition of what really matters. Collaborative filtering doesn’t have to lead to compulsive media: The whole game is in what values the filters seek to pull out. Alternately, Google or Facebook could place a slider bar running from “only stuff I like” to “stuff other people like that I’ll probably hate” at the top of search results and the News Feed, allowing users to set their own balance between tight personalization and a more diverse information flow. This approach would have two benefits: It would make clear that there’s personalization going on, and it would place it more firmly in the user’s control.
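As a rough illustration of how those two signals and a slider might combine, here is a hypothetical scoring sketch; the field names, the raw counts, and the slider semantics are assumptions of mine, not Facebook’s or Google’s ranking code:

```python
def feed_score(item, slider=0.5):
    """Hypothetical blend of two signals: what people like and what they
    flag as important. `slider` runs from 0 ("only stuff I like") toward 1
    (weight civic importance and less-comfortable material more heavily)."""
    return (1 - slider) * item["likes"] + slider * item["importants"]

items = [
    {"title": "Celebrity gossip", "likes": 900, "importants": 40},
    {"title": "Flooding in Pakistan", "likes": 200, "importants": 700},
]

# The same two stories rank differently as the reader moves the slider.
for slider in (0.0, 0.5, 1.0):
    ranked = sorted(items, key=lambda i: feed_score(i, slider), reverse=True)
    print(slider, [i["title"] for i in ranked])
```

The point of the sketch is not the arithmetic but the control surface: the blend is visible to the reader and adjustable by the reader, which is exactly the transparency and control the paragraph above argues for.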

There’s one more thing the engineers of the filter bubble can do. They can solve for serendipity, by designing filtering systems to expose people to topics outside their normal experience. This will often be in tension with pure optimization in the short term, because a personalization system with an element of randomness will (by definition) get fewer clicks. But as the problems of personalization become better known, it may be a good move in the long run—consumers may choose systems that are good at introducing them to new topics. Perhaps what we need is a kind of anti-Netflix Prize—a Serendipity Prize for systems that are the best at holding readers’ attention while introducing them to new topics and ideas.
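What “solving for serendipity” could look like in code is, at its simplest, a swap step: take a feed already ranked by a personalization system and replace a couple of slots with stories on topics the reader hasn’t engaged with before. The sketch below assumes such a ranked feed and a pool of unfamiliar-topic stories; all names are hypothetical:

```python
import random

def add_serendipity(personalized_feed, unseen_topic_items, k=2):
    """Hypothetical sketch: trade a few tightly personalized slots for
    stories on topics outside the reader's normal experience."""
    feed = personalized_feed[:]
    slots = random.sample(range(len(feed)), k=min(k, len(feed)))
    for slot in slots:
        if unseen_topic_items:
            feed[slot] = unseen_topic_items.pop()
    return feed

feed = ["story A", "story B", "story C", "story D", "story E"]
fresh = ["a science story", "a foreign-policy story"]
print(add_serendipity(feed, fresh))
```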

If this shift toward corporate responsibility seems improbable, it’s not without precedent. In the mid-1800s, printing a newspaper was hardly a reputable business. Papers were fiercely partisan and recklessly ideological. They routinely altered facts to suit their owners’ vendettas of the day, or just to add color. It was this culture of crass commercialism and manipulation that Walter Lippmann railed against in Liberty and the News.

But as newspapers became highly profitable and highly important, they began to change. It became possible, in a few big cities, to run papers that weren’t just chasing scandal and sensation—in part, because their owners could afford not to. Courts started to recognize a public interest in journalism and rule accordingly. Consumers started to demand more scrupulous and rigorous editing.

Urged on by Lippmann’s writings, an editorial ethic began to take shape. It was never shared universally or followed as well as it could have been. It was always compromised by the business demands of newspapers’ owners and shareholders. It failed outright repeatedly—access to power brokers compromised truth telling, and the demands of advertisers overcame the demands of readers. But in the end, it succeeded, somehow, in seeing us through a century of turmoil.

The torch is now being passed to a new generation of curators, and we need them to pick it up and carry it with pride. We need programmers who will build public life and citizenship into the worlds they create. And we need users who will hold them to it when the pressure of monetization pulls them in a different direction.

What Governments and Citizens Can Do

There’s plenty that the companies that power the filter bubble can do to mitigate the negative consequences of personalization—the ideas above are just a start. But ultimately, some of these problems are too important to leave in the hands of private actors with profit-seeking motives. That’s where governments come in.

Ultimately, as Eric Schmidt told Stephen Colbert, Google is just a company. Even if there are ways of addressing these issues that don’t hurt the bottom line—which there may well be—doing so simply isn’t always going to be a top-level priority. As a result, after we’ve each done our part to pop the filter bubble, and after companies have done what they’re willing to do, there’s probably a need for government oversight to ensure that we control our online tools and not the other way around.

In his book Republic.com, Cass Sunstein suggested a kind of “fairness doctrine” for the Internet, in which information aggregators have to expose their audiences to both sides. Though he later changed his mind, the proposal suggests one direction for regulation: Just require curators to behave in a public-oriented way, exposing their readers to diverse lines of argument. I’m skeptical, for some of the same reasons Sunstein abandoned the idea: Curation is a nuanced, dynamic thing, an art as much as a science, and it’s hard to imagine how regulating editorial ethics wouldn’t inhibit a great deal of
