President Clinton’s videotaped apology to the Chinese people was barred from Chinese media for four days.

As anti-U.S. riots began to break out in the streets, China’s largest newspaper, the People’s Daily, created an online chat forum called the Anti-Bombing Forum. Already, in 1999, chat forums were huge in China—much larger than they’ve ever been in the United States. As New York Times journalist Tom Downey explained a few years later, “News sites and individual blogs aren’t nearly as influential in China, and social networking hasn’t really taken off. What remain most vital are the largely anonymous online forums… that are much more participatory, dynamic, populist and perhaps even democratic than anything on the English-language Internet.” Tech writer Clive Thompson quotes Shanthi Kalathil, a researcher at the Carnegie Endowment, who says that the Anti-Bombing Forum helped to legitimize, among “an elite, wired section of the population,” the Chinese government’s position that the bombing was deliberate. The forum was a form of crowd-sourced propaganda: Rather than just telling Chinese citizens what to think, it lifted the voices of thousands of patriots aligned with the state.

Most of the Western reporting on Chinese information management focuses on censorship: Google’s choice to remove, temporarily, search results for “Tiananmen Square,” or Microsoft’s decision to ban the word “democracy” from Chinese blog posts, or the Great Firewall that sits between China and the outside world and sifts through every packet of information that enters or exits the country. Censorship in China is real: There are plenty of words that have been more or less stricken from the public discourse. When Thompson asked whether the popular Alibaba engine would show results for dissident movements, CEO Jack Ma shook his head. “No! We are a business!” he said. “Shareholders want to make money. Shareholders want us to make the customer happy. Meanwhile we do not have any responsibilities saying we should do this or that political thing.”

In practice, the firewall is not so hard to circumvent. Corporate virtual private networks—Internet connections encrypted to prevent espionage—operate with impunity. Proxies and firewall workarounds like Tor connect in-country Chinese dissidents with even the most hard-core antigovernment Web sites. But to focus exclusively on the firewall’s inability to perfectly block information is to miss the point. China’s objective isn’t so much to blot out unsavory information as to alter the physics around it—to create friction for problematic information and to route public attention to progovernment forums. While it can’t block all of the people from all of the news all of the time, it doesn’t need to.

“What the government cares about,” Atlantic journalist James Fallows writes, “is making the quest for information just enough of a nuisance that people generally won’t bother.” The strategy, says Xiao Qiang of the University of California at Berkeley, is “about social control, human surveillance, peer pressure, and self-censorship.” Because there’s no official list of blocked keywords or forbidden topics published by the government, businesses and individuals censor themselves to avoid a visit from the police. Which sites are available changes daily. And while some bloggers suggest that the system’s unreliability is a result of faulty technology (“the Internet will override attempts to control it!”), for the government this is a feature, not a bug. James Mulvenon, the head of the Center for Intelligence Research and Analysis, puts it this way: “There’s a randomness to their enforcement, and that creates a sense that they’re looking at everything.”

Lest that sensation be too subtle, the Public Security Bureau in Shenzhen, China, developed a more direct approach: Jingjing and Chacha, the cartoon Internet Police. As the director of the initiative told the China Digital Times, he wanted “to let all Internet users know that the Internet is not a place beyond law [and that] the Internet Police will maintain order in all online behavior.” Icons of the male-female pair, complete with jaunty flying epaulets and smart black shoes, were placed on all major Web sites in Shenzhen; they even had instant-message addresses so that six police officers could field questions from the online crowds.

“People are actually quite free to talk about [democracy],” Google’s China point man, Kai-Fu Lee, told Thompson in 2006. “I don’t think they care that much. Hey, U.S. democracy, that’s a good form of government. Chinese government, good and stable, that’s a good form of government. Whatever, as long as I get to go to my favorite Web site, see my friends, live happily.” It may not be a coincidence that the Great Firewall recently stopped blocking pornography. “Maybe they are thinking that if Internet users have some porn to look at, then they won’t pay so much attention to political matters,” Michael Anti, a Beijing-based analyst, told the AP.

We usually think about censorship as a process by which governments alter facts and content. When the Internet came along, many hoped it would eliminate censorship altogether—the flow of information would simply be too swift and strong for governments to control. “There’s no question China has been trying to crack down on the Internet,” Bill Clinton told the audience at a March 2000 speech at Johns Hopkins University. “Good luck! That’s sort of like trying to nail Jell-O to the wall.”

But in the age of the Internet, it’s still possible for governments to manipulate the truth. The process has just taken a different shape: Rather than simply banning certain words or opinions outright, it’ll increasingly revolve around second-order censorship—the manipulation of curation, context, and the flow of information and attention. And because the filter bubble is primarily controlled by a few centralized companies, it’s not as difficult to adjust this flow on an individual-by-individual basis as you might think. Rather than decentralizing power, as its early proponents predicted, in some ways the Internet is concentrating it.

Lords of the Cloud

To get a sense of how personalization might be used for political ends, I talked to a man named John Rendon.

Rendon affably describes himself as an “information warrior and perception manager.” From the Rendon Group’s headquarters in Washington, D.C.’s, Dupont Circle, he provides those services to dozens of U.S. agencies and foreign governments. When American troops rolled into Kuwait City during the first Iraq war, television cameras captured hundreds of Kuwaitis joyfully waving American flags. “Did you ever stop to wonder,” he asked an audience later, “how the people of Kuwait City, after being held hostage for seven long and painful months, were able to get handheld American flags? And for that matter, the flags of other coalition countries? Well, you now know the answer. That was one of my jobs.”

Much of Rendon’s work is confidential—he enjoys a level of beyond–Top Secret clearance that even high-level intelligence analysts sometimes fail to get. His role in George W. Bush–era pro-U.S. propaganda in Iraq is unclear: While some sources claim he was a central figure in the effort, Rendon denies any involvement. But his dream is quite clear: Rendon wants to see a world where television “can drive the policy process,” where “border patrols [are] replaced by beaming patrols,” and where “you can win without fighting.”

Given all that, I was a bit surprised when the first weapon he referred me to was a very quotidian one: a thesaurus. The key to changing public opinion, Rendon said, is finding different ways to say the same thing. He described a matrix, with extreme language or opinion on one side and mild opinion on the other. By using sentiment analysis to figure out how people in a country felt about an event—say, a new arms deal with the United States—and identify the right synonyms to move them toward approval, you could “gradually nudge a debate.” “It’s a lot easier to be close to what reality is” and push it in the right direction, he said, than to make up a new reality entirely.
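To make the mechanics of that “gradual nudge” concrete, here is a minimal sketch in Python. It is purely illustrative, not Rendon’s actual tooling: the synonym ladder, the sentiment scores, and the one-rung nudge rule are all assumptions invented for the sake of the example.

    # Illustrative only: a "synonym matrix" for one concept, with wordings
    # ordered from hostile to approving. The entries are invented examples.
    SYNONYM_LADDER = {
        "arms deal": [
            "arms buildup",          # most extreme / hostile framing
            "weapons sale",
            "arms agreement",
            "defense partnership",
            "security cooperation",  # most approving framing
        ],
    }

    def current_rung(concept: str, sentiment: float) -> int:
        """Map a sentiment estimate in [-1.0, 1.0] to a rung on the ladder."""
        ladder = SYNONYM_LADDER[concept]
        return round((sentiment + 1) / 2 * (len(ladder) - 1))

    def nudged_wording(concept: str, sentiment: float) -> str:
        """Choose wording one rung more approving than where the public sits,
        staying close to what reality is rather than leaping to the far end."""
        ladder = SYNONYM_LADDER[concept]
        rung = min(current_rung(concept, sentiment) + 1, len(ladder) - 1)
        return ladder[rung]

    # If sentiment analysis scores the public as mildly negative (-0.3),
    # the suggested framing shifts one step, from "weapons sale" to
    # "arms agreement," rather than jumping straight to approval.
    print(nudged_wording("arms deal", -0.3))

Repeated over weeks against fresh sentiment estimates, a loop like this would produce exactly the slow, deniable drift Rendon describes.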

Rendon had seen me talk about personalization at an event we both attended. Filter bubbles, he told me, provided new ways of managing perceptions. “It begins with getting inside the algorithm. If you could find a way to load your content up so that only your content gets pulled by the stalking algorithm, then you’d have a better chance of shaping belief sets,” he said. In fact, he suggested, if we looked in the right places, we might be able to see traces of this kind of thing happening now—sentiment being algorithmically shifted over time.

But if the filter bubble might make shifting perspectives easier in a future Iraq or Panama, Rendon was clearly concerned about the impact of self-sorting and personalized filtering on democracy at home. “If I’m taking a photo of a tree,” he said, “I need to know what season we’re in. Every season it looks different. It could be dying, or just losing its leaves in autumn.” To make good decisions, context is crucial—that’s why the military is so focused on what they call “360-degree situational awareness.” In the filter bubble, you don’t get 360 degrees—and you might not get more than one.

I returned to the question about using algorithms to shift sentiment. “How does someone game the system when it’s all about self-generated, self-reinforcing information flows? I have to think about it more,” Rendon said.
