bubble, Tom Friedman declared that the Internet would “make us all next door neighbors.” In fact, this idea was the core of his thesis in The Lexus and the Olive Tree: “The Internet is going to be like a huge vise that takes the globalization system… and keeps tightening and tightening that system around everyone, in ways that will only make the world smaller and smaller and faster and faster with each passing day.”

Friedman seemed to have in mind a kind of global village in which kids in Africa and executives in New York would build a community together. But that’s not what’s happening: Our virtual next-door neighbors look more and more like our real-world neighbors, and our real-world neighbors look more and more like us. We’re getting a lot of bonding but very little bridging. And this is important because it’s bridging that creates our sense of the “public”—the space where we address the problems that transcend our niches and narrow self-interests.

We are predisposed to respond to a pretty narrow set of stimuli—if a piece of news is about sex, power, gossip, violence, celebrity, or humor, we are likely to read it first. This is the content that most easily makes it into the filter bubble. It’s easy to push “Like” and increase the visibility of a friend’s post about finishing a marathon or an instructional article about how to make onion soup. It’s harder to push the “Like” button on an article titled, “Darfur sees bloodiest month in two years.” In a personalized world, important but complex or unpleasant issues—the rising prison population, for example, or homelessness—are less likely to come to our attention at all.

As a consumer, it’s hard to argue with blotting out the irrelevant and unlikable. But what is good for consumers is not necessarily good for citizens. What I seem to like may not be what I actually want, let alone what I need to know to be an informed member of my community or country. “It’s a civic virtue to be exposed to things that appear to be outside your interest,” technology journalist Clive Thompson told me. “In a complex world, almost everything affects you—that closes the loop on pecuniary self-interest.” Cultural critic Lee Siegel puts it a different way: “Customers are always right, but people aren’t.”

THE STRUCTURE OF our media affects the character of our society. The printed word is conducive to democratic argument in a way that laboriously copied scrolls aren’t. Television had a profound effect on political life in the twentieth century—from the Kennedy assassination to 9/11—and it’s probably not a coincidence that a nation whose denizens spend thirty-six hours a week watching TV has less time for civic life.

The era of personalization is here, and it’s upending many of our predictions about what the Internet would do. The creators of the Internet envisioned something bigger and more important than a global system for sharing pictures of pets. The manifesto that helped launch the Electronic Frontier Foundation in the early nineties championed a “civilization of Mind in cyberspace”—a kind of worldwide metabrain. But personalized filters sever the synapses in that brain. Without knowing it, we may be giving ourselves a kind of global lobotomy instead.

From megacities to nanotech, we’re creating a global society whose complexity has passed the limits of individual comprehension. The problems we’ll face in the next twenty years—energy shortages, terrorism, climate change, and disease—are enormous in scope. They’re problems that we can only solve together.

Early Internet enthusiasts like Web creator Tim Berners-Lee hoped it would be a new platform for tackling those problems. I believe it still can be—and as you read on, I’ll explain how. But first we need to pull back the curtain—to understand the forces that are taking the Internet in its current, personalized direction. We need to lay bare the bugs in the code—and the coders—that brought personalization to us.

If “code is law,” as Larry Lessig famously declared, it’s important to understand what the new lawmakers are trying to do. We need to understand what the programmers at Google and Facebook believe in. We need to understand the economic and social forces that are driving personalization, some of which are inevitable and some of which are not. And we need to understand what all this means for our politics, our culture, and our future.

Without sitting down next to a friend, it’s hard to tell how the version of Google or Yahoo News that you’re seeing differs from anyone else’s. But because the filter bubble distorts our perception of what’s important, true, and real, it’s critically important to render it visible. That is what this book seeks to do.

1

The Race for Relevance

If you’re not paying for something, you’re not the customer; you’re the product being sold.

—Andrew Lewis, under the alias Blue_beetle, on the Web site MetaFilter

In the spring of 1994, Nicholas Negroponte sat writing and thinking. At the MIT Media Lab, Negroponte’s brainchild, young chip designers and virtual-reality artists and robot-wranglers were furiously at work building the toys and tools of the future. But Negroponte was mulling over a simpler problem, one that millions of people pondered every day: what to watch on TV.

By the mid-1990s, there were hundreds of channels streaming out live programming twenty-four hours a day, seven days a week. Most of the programming was horrendous and boring: infomercials for new kitchen gadgets, music videos for the latest one-hit-wonder band, cartoons, and celebrity news. For any given viewer, only a tiny percentage of it was likely to be interesting.

As the number of channels increased, the standard method of surfing through them was getting more and more hopeless. It’s one thing to search through five channels. It’s another to search through five hundred. And when the number hits five thousand—well, the method’s useless.

But Negroponte wasn’t worried. All was not lost: in fact, a solution was just around the corner. “The key to the future of television,” he wrote, “is to stop thinking about television as television,” and to start thinking about it as a device with embedded intelligence. What consumers needed was a remote control that controls itself, an intelligent automated helper that would learn what each viewer watches and capture the programs relevant to him or her. “Today’s TV set lets you control brightness, volume, and channel,” Negroponte typed. “Tomorrow’s will allow you to vary sex, violence, and political leaning.”

And why stop there? Negroponte imagined a future swarming with intelligent agents to help with problems like the TV one. Like a personal butler at a door, the agents would let in only your favorite shows and topics. “Imagine a future,” Negroponte wrote, “in which your interface agent can read every newswire and newspaper and catch every TV and radio broadcast on the planet, and then construct a personalized summary. This kind of newspaper is printed in an edition of one…. Call it the Daily Me.”

The more he thought about it, the more sense it made. The solution to the information overflow of the digital age was smart, personalized, embedded editors. In fact, these agents didn’t have to be limited to television; as he suggested to the editor of the new tech magazine Wired, “Intelligent agents are the unequivocal future of computing.”

In San Francisco, Jaron Lanier responded to this argument with dismay. Lanier was one of the creators of virtual reality; since the eighties, he’d been tinkering with how to bring computers and people together. But the talk of agents struck him as crazy. “What’s got into all of you?” he wrote in a missive to the “Wired-style community” on his Web site. “The idea of ‘intelligent agents’ is both wrong and evil…. The agent question looms as a deciding factor in whether [the Net] will be much better than TV, or much worse.”

Lanier was convinced that, because they’re not actually people, agents would force actual humans to interact with them in awkward and pixelated ways. “An agent’s model of what you are interested in will be a cartoon model, and you will see a cartoon version of the world through the agent’s eyes,” he wrote.

And there was another problem: The perfect agent would presumably screen out most or all advertising. But since online commerce was driven by advertising, it seemed unlikely that these companies would roll out agents who would do such violence to their bottom line. It was more likely, Lanier wrote, that these agents would have double loyalties—bribable agents. “It’s not clear who they’re working for.”

It was a clear and plangent plea. But though it stirred up some chatter in online newsgroups, it didn’t persuade the software giants of this early Internet era. They were convinced by Negroponte’s logic: The company that figured out how to sift through the digital haystack for the nuggets of gold would win the future. They could see the attention crash coming, as the information options available to each person rose toward infinity. If you

You are reading The Filter Bubble