Gerbner called this the mean world syndrome: If you grow up in a home where there’s more than, say, three hours of television per day, for all practical purposes, you live in a meaner world—and act accordingly—than your next-door neighbor who lives in the same place but watches less television. “You know, who tells the stories of a culture really governs human behavior,” Gerbner later said.

Gerbner died in 2005, but he lived long enough to see the Internet begin to break that stranglehold. It must have been a relief: Although our online cultural storytellers are still quite consolidated, the Internet at least offers more choice. If you want to get your local news from a blogger rather than a local TV station that trumpets crime rates to get ratings, you can.

But if the mean world syndrome poses less of a risk these days, there’s a new problem on the horizon: We may now face what persuasion-profiling theorist Dean Eckles calls a friendly world syndrome, in which some of the biggest and most important problems fail to reach our view at all.

While the mean world on television arises from a cynical "if it bleeds, it leads" approach to programming, the friendly world generated by algorithmic filtering may not be as intentional. According to Facebook engineer Andrew Bosworth, the team that developed the Like button originally considered a number of options, from stars to a thumbs-up sign (which they rejected because in Iran and Thailand it's an obscene gesture). For a month in the summer of 2007, the button was known as the Awesome button. Eventually, however, the Facebook team gravitated toward Like, which is more universal.

That Facebook chose Like instead of, say, Important is a small design decision with far-reaching consequences: The stories that get the most attention on Facebook are the stories that get the most Likes, and the stories that get the most Likes are, well, more likable.
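
To make the consequence concrete, here is a minimal sketch in Python (a toy model, not Facebook's actual ranking code; the stories and numbers are invented) of what happens when a feed is sorted on the only signal the interface collects:

```python
# A toy model of Like-based ranking: cheerful stories accumulate the
# one signal the interface measures, so they float to the top.
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    likes: int       # the signal the Like button captures
    importance: int  # no button exists to capture this signal

stories = [
    Story("Friend's adorable puppy learns to surf", likes=482, importance=1),
    Story("City homelessness rises for third year", likes=9, importance=9),
    Story("Ten photos from Dave's beach wedding", likes=230, importance=1),
]

# Rank purely on Likes, the only lever readers were given.
feed = sorted(stories, key=lambda s: s.likes, reverse=True)
for story in feed:
    print(f"{story.likes:>4} likes | {story.headline}")
# The homelessness story sinks to the bottom, not because it matters
# less, but because "Important" was never a button.
```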

Facebook is hardly the only filtering service that will tend toward an antiseptically friendly world. As Eckles pointed out to me, even Twitter, which has a reputation for putting filtering in the hands of its users, has this tendency. Twitter users see most of the tweets of the folks they follow, but if my friend is having an exchange with someone I don’t follow, it doesn’t show up. The intent is entirely innocuous: Twitter is trying not to inundate me with conversations I’m not interested in. But the result is that conversations between my friends (who will tend to be like me) are overrepresented, while conversations that could introduce me to new ideas are obscured.
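
The mechanics are simple enough to sketch. Below is a toy model of the rule Eckles describes, assuming a follow-both condition for replies; it is an illustration, not Twitter's actual code:

```python
# Simplified timeline rule: a reply appears only if the viewer follows
# both the author and the person being replied to.

def visible_in_timeline(viewer_follows, author, reply_to=None):
    """Return True if a tweet should appear in the viewer's timeline."""
    if author not in viewer_follows:
        return False
    # An ordinary tweet from someone you follow is always shown.
    if reply_to is None:
        return True
    # A reply is shown only when you also follow the other party,
    # so conversations that cross your social circle drop out of view.
    return reply_to in viewer_follows

my_follows = {"alice", "bob"}
print(visible_in_timeline(my_follows, "alice"))                    # True
print(visible_in_timeline(my_follows, "alice", reply_to="bob"))    # True
print(visible_in_timeline(my_follows, "alice", reply_to="carol"))  # False: hidden
```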

Of course, friendly doesn’t describe all of the stories that pierce the filter bubble and shape our sense of the political world. As a progressive political news junkie, I get plenty of news about Sarah Palin and Glenn Beck. The valence of this news, however, is very predictable: People are posting it to signal their dismay with Beck’s and Palin’s rhetoric and to build a sense of solidarity with their friends, who presumably feel the same way. It’s rare that my assumptions about the world are shaken by what I see in my news feed.

Emotional stories are the ones that generally thrive in the filter bubble. The Wharton School study on the New York Times’s Most Forwarded List, discussed in chapter 2, found that stories that aroused strong feelings—awe, anxiety, anger, happiness—were much more likely to be shared. If television gives us a “mean world,” filter bubbles give us an “emotional world.”

One of the troubling side effects of the friendly world syndrome is that some important public problems will disappear. Few people seek out information about homelessness, or share it, for that matter. In general, dry, complex, slow-moving problems—a lot of the truly significant issues—won’t make the cut. And while we used to rely on human editors to spotlight these crucial problems, their influence is now waning.

Even advertising isn’t necessarily a foolproof way of alerting people to public problems, as the environmental group Oceana found out. In 2004, Oceana ran a campaign urging Royal Caribbean to stop dumping its raw sewage into the sea; as part of the campaign, it took out a Google ad that said “Help us protect the world’s oceans. Join the fight!” After two days, Google pulled the ad, citing “language advocating against the cruise line industry,” which violated its general guidelines about taste. Apparently, ads that implicated corporations in public issues weren’t welcome.

The filter bubble will often block out the things in our society that are important but complex or unpleasant. It renders them invisible. And it’s not just the issues that disappear. Increasingly, it’s the whole political process.

The Invisible Campaign

When George W. Bush came out of the 2000 election with far fewer votes than Karl Rove expected, Rove set in motion a series of experiments in microtargeted media in Georgia—looking at a wide range of consumer data (“Do you prefer beer or wine?”) to try to predict voting behavior and identify who was persuadable and who could be easily motivated to get to the polls. Though the findings are still secret, legend has it that the methods Rove discovered were at the heart of the GOP’s successful get-out-the-vote strategy in 2002 and 2004.

On the left, Catalist, a firm staffed by former Amazon engineers, has built a database of hundreds of millions of voter profiles. For a fee, organizing and activist groups (including MoveOn) query it to help determine which doors to knock on and whom to target with ads. And that’s just the start. In a memo for fellow progressives, Mark Steitz, one of the primary Democratic data gurus, recently wrote that “targeting too often returns to a bombing metaphor—dropping messages from planes. Yet the best data tools help build relationships based on observed contacts with people. Someone at the door finds out someone is interested in education; we get back to that person and others like him or her with more information. Amazon’s recommendation engine is the direction we need to head.” The trend is clear: We’re moving from swing states to swing people.
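
A hypothetical sketch makes the swing-people logic concrete. Every field name and score below is invented for illustration; this is not Catalist's real schema or scoring:

```python
# Toy voter-file workflow: rank doors by expected payoff, then follow up
# with material matched to the interest observed at the door.

voters = [
    {"id": 101, "turnout_rate": 0.90, "persuadability": 0.6, "observed_interest": None},
    {"id": 102, "turnout_rate": 0.30, "persuadability": 0.8, "observed_interest": "education"},
    {"id": 103, "turnout_rate": 0.95, "persuadability": 0.1, "observed_interest": None},
]

def knock_priority(v):
    # A door knock pays off when a voter is both persuadable and likely
    # to vote; reliable partisans (high turnout, low persuadability)
    # drop to the bottom of the walk list.
    return v["persuadability"] * v["turnout_rate"]

walk_list = sorted(voters, key=knock_priority, reverse=True)

for v in walk_list:
    if v["observed_interest"]:
        # The Amazon-style step: act on the observed contact rather
        # than "dropping messages from planes."
        print(f"voter {v['id']}: send {v['observed_interest']} mailer")
    else:
        print(f"voter {v['id']}: knock (priority {knock_priority(v):.2f})")
```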

Consider this scenario: It’s 2016, and the race is on for the presidency of the United States. Or is it?

It depends on who you are, really. If the data says you vote frequently and that you may have been a swing voter in the past, the race is a maelstrom. You’re besieged with ads, calls, and invitations from friends. If you vote intermittently, you get a lot of encouragement to get out to the polls.

But let’s say you’re more like an average American. You usually vote for candidates from one party. To the data crunchers from the opposing party, you don’t look particularly persuadable. And because you vote in presidential elections pretty regularly, you’re also not a target for “get out the vote” calls from your own party. Though you make it to the polls as a matter of civic duty, you’re not that actively interested in politics. You’re more interested in, say, soccer and robots and curing cancer and what’s going on in the town where you live. Your personalized news feeds reflect those interests, not the news from the latest campaign stop.

In a filtered world, with candidates microtargeting the few persuadables, would you know that the campaign was happening at all?

Even if you visit a site that aims to cover the race for a general audience, it’ll be difficult to tell what’s going on. What is the campaign about? There is no general, top-line message, because the candidates aren’t appealing to a general public. Instead, there is a series of message fragments designed to penetrate personalized filters.

Google is preparing for this future. Even in 2010, it staffed a round-the-clock “war room” for political advertising, aiming to be able to quickly sign off on and activate new ads even in the wee hours of October nights. Yahoo is conducting a series of experiments to determine how to match the publicly available list of who voted in each district with the click signals and Web history data it picks up on its site. And data-aggregation firms like Rapleaf in San Francisco are trying to correlate Facebook social graph information with voting behavior—so that they can show you the political ad that best works for you based on the responses of your friends.
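
The basic idea behind that last technique can be sketched in a few lines. This is a toy illustration of friend-based ad selection, not Rapleaf's actual system; the names and click data are made up:

```python
# Pick the ad variant that performed best among a user's friends.
from collections import defaultdict

# Observed responses: which friends clicked which ad variant.
friend_clicks = [
    ("alice", "ad_jobs"), ("bob", "ad_jobs"),
    ("carol", "ad_environment"), ("alice", "ad_jobs"),
]

def best_ad_for(user_friends, clicks, default="ad_generic"):
    """Choose the variant with the most clicks among this user's friends."""
    tally = defaultdict(int)
    for friend, ad in clicks:
        if friend in user_friends:
            tally[ad] += 1
    return max(tally, key=tally.get) if tally else default

my_friends = {"alice", "bob", "dave"}
print(best_ad_for(my_friends, friend_clicks))  # ad_jobs: friends' responses decide
```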

The impulse to talk to voters about the things they’re actually interested in isn’t a bad one—it’d be great if mere mention of the word politics didn’t cause so many eyes to glaze over. And certainly the Internet has unleashed the coordinated energy of a whole new generation of activists—it’s easier than ever to find people who share your political passions. But while it’s easier than ever to bring a group of people together, as personalization advances it’ll become harder for any given group to reach a broad audience. In some ways, personalization poses a threat to public life itself.

Because the state of the art in political advertising is half a decade behind the state of the art in commercial advertising, most of this change is still to come. But for starters, filter-bubble politics could effectively make even more of us into single issue voters. Like personalized media, personalized advertising is a two-way street: I may see an ad about, say, preserving the environment because I drive a Prius, but seeing the ad also makes me care more about preserving the environment. And if a congressional campaign can determine that this is the issue on
