incidentally) is often used to refer to malicious xenophobic patterns that aren’t true—“people of this skin color are less intelligent” is a classic example. But stereotypes, and the negative consequences that flow from them, are unfair to specific people even when the generalization is broadly accurate.

Marketers are already exploring the gray area between what can be predicted and what predictions are fair. According to Charlie Stryker, an old hand in the behavioral targeting industry who spoke at the Social Graph Symposium, the U.S. Army has had terrific success using social-graph data to recruit for the military—after all, if six of your Facebook buddies have enlisted, it’s likely that you would consider doing so too. Drawing inferences based on what people like you or people linked to you do is pretty good business. And it’s not just the army. Banks are beginning to use social data to decide to whom to offer loans: If your friends don’t pay on time, it’s likely that you’ll be a deadbeat too. “A decision is going to be made on creditworthiness based on the creditworthiness of your friends,” Stryker said. “There are applications of this technology that can be very powerful,” another social targeting entrepreneur told the Wall Street Journal. “Who knows how far we’d take it?”
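To make the mechanism concrete, here is a minimal sketch of this kind of scoring-by-association. Everything in it is invented for illustration: the blending formula, the friend_weight parameter, and the numbers; no lender’s actual model is public.

```python
# Toy sketch of "creditworthiness by association" -- an illustration of
# the idea Stryker describes, not any real lender's model.
def social_credit_score(own_repayment_rate, friends_repayment_rates,
                        friend_weight=0.3):
    """Blend a person's own repayment record with their friends' average.

    friend_weight is a made-up parameter controlling how much the
    friends' behavior counts for or against the individual.
    """
    if not friends_repayment_rates:
        return own_repayment_rate
    friends_avg = sum(friends_repayment_rates) / len(friends_repayment_rates)
    return (1 - friend_weight) * own_repayment_rate + friend_weight * friends_avg

# A borrower who always pays on time, but whose friends often don't:
print(social_credit_score(1.0, [0.5, 0.4, 0.6]))  # 0.85 -- docked for friends
```

Note what the arithmetic does: a spotless record gets marked down purely because of the company one keeps, which is exactly the fairness problem this chapter circles.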

Part of what’s troubling about this world is that companies aren’t required to explain on what basis they’re making these decisions. And as a result, you can get judged without knowing it and without being able to appeal. For example, LinkedIn, the social job-hunting site, offers a career-trajectory prediction service: by comparing your resume to those of other people in your field who are further along, LinkedIn can forecast where you’ll be in five years. Engineers at the company hope that soon it’ll be able to pinpoint career choices that lead to better outcomes—“mid-level IT professionals like you who attended Wharton business school made $25,000/year more than those who didn’t.” As a service to customers, it’s pretty useful. But imagine if LinkedIn provided that data to corporate clients to help them weed out people who are forecast to be losers. Because that could happen entirely without your knowledge, you’d never get the chance to argue, to prove the prediction wrong, to have the benefit of the doubt.
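As a rough illustration of how such a forecast could work (the matching rule, job titles, and data below are invented, not LinkedIn’s actual method): find people whose early career path matches yours, and report where those “future selves” ended up.

```python
# Invented illustration of a career-trajectory forecast in the spirit of
# the LinkedIn feature described above -- not its actual algorithm.
careers = [
    # (career path so far, where that person is five years later)
    (["analyst", "engineer", "senior engineer"], "engineering manager"),
    (["analyst", "engineer", "senior engineer"], "staff engineer"),
    (["analyst", "consultant", "manager"], "director"),
]

def forecast(my_path):
    # Collect the outcomes of everyone whose path begins the way mine does.
    matches = [outcome for path, outcome in careers
               if path[:len(my_path)] == my_path]
    return matches or ["no comparable profiles found"]

print(forecast(["analyst", "engineer"]))
# ['engineering manager', 'staff engineer']
```

The same lookup that powers a helpful career tool could, pointed the other way, quietly screen out candidates whose matched “futures” look unpromising.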

If it seems unfair for banks to discriminate against you because your high school buddy is bad at paying his bills or because you like something that a lot of loan defaulters also like, well, it is. And it points to a basic problem with induction, the logical method by which algorithms use data to make predictions.

Philosophers have been wrestling with this problem since long before there were computers to induce with. While you can prove a mathematical theorem by arguing it out from first principles, the philosopher David Hume pointed out in 1748 that reality doesn’t work that way. As the investment cliché has it, past performance is not indicative of future results.

This raises some big questions for science, which is at its core a method for using data to predict the future. Karl Popper, one of the preeminent philosophers of science, made it his life’s mission to try to sort out the problem of induction, as it came to be known. While the optimistic thinkers of the late 1800s looked at the history of science and saw a journey toward truth, Popper preferred to focus on the wreckage along the side of the road—the abundance of failed theories and ideas that were perfectly consistent with the scientific method and yet horribly wrong. After all, the Ptolemaic universe, with the earth in the center and the sun and planets revolving around it, survived an awful lot of mathematical scrutiny and scientific observation.

Popper posed his problem in a slightly different way: Just because you’ve only ever seen white swans doesn’t mean that all swans are white. What you have to look for is the black swan, the counterexample that proves the theory wrong. “Falsifiability,” Popper argued, was the key to the search for truth: The purpose of science, for Popper, was to advance the biggest claims for which one could not find any countervailing examples, any black swans. Underlying Popper’s view was a deep humility about scientifically induced knowledge—a sense that we’re wrong as often as we’re right, and we usually don’t know when we are.

It’s this humility that many algorithmic prediction methods fail to build in. Sure, they encounter people or behaviors that don’t fit the mold from time to time, but these aberrations don’t fundamentally compromise their algorithms. After all, the advertisers whose money drives these systems don’t need the models to be perfect. They’re most interested in hitting demographics, not complex human beings.

When you model the weather and predict there’s a 70 percent chance of rain, it doesn’t affect the rain clouds. It either rains or it doesn’t. But when you predict that because my friends are untrustworthy, there’s a 70 percent chance that I’ll default on my loan, there are consequences if you get me wrong. You’re discriminating.

The best way to avoid overfitting, as Popper suggests, is to try to prove the model wrong and to build algorithms that give the benefit of the doubt. If Netflix shows me a romantic comedy and I like it, it’ll show me another one and begin to think of me as a romantic-comedy lover. But if it wants to get a good picture of who I really am, it should be constantly testing the hypothesis by showing me Blade Runner in an attempt to prove it wrong. Otherwise, I end up caught in a local maximum populated by Hugh Grant and Julia Roberts.
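One standard way to build this kind of self-skepticism into a recommender is epsilon-greedy exploration: most of the time, serve what the model believes the user likes; a small fraction of the time, deliberately serve something it believes the user won’t, and watch what happens. The sketch below is a toy version of that idea, with made-up titles, genres, and affinity scores; it is not Netflix’s actual system.

```python
import random

EPSILON = 0.1  # fraction of picks spent probing outside the user's profile

# Toy model: predicted affinity per genre, learned from past clicks.
predicted_affinity = {"romantic comedy": 0.9, "sci-fi": 0.2, "documentary": 0.4}

catalog = [
    ("Notting Hill", "romantic comedy"),
    ("Blade Runner", "sci-fi"),
    ("Man on Wire", "documentary"),
]

def recommend():
    if random.random() < EPSILON:
        # Exploration: deliberately pick the lowest-affinity title --
        # the algorithmic equivalent of showing a rom-com fan Blade Runner.
        title, _ = min(catalog, key=lambda m: predicted_affinity[m[1]])
    else:
        # Exploitation: serve what the model already believes the user likes.
        title, _ = max(catalog, key=lambda m: predicted_affinity[m[1]])
    return title

def update(title, liked):
    # A liked exploratory pick is a black swan: the model was wrong. Widen it.
    genre = dict(catalog)[title]
    predicted_affinity[genre] += 0.1 if liked else -0.1
```

The exploration branch is the Popperian move: it spends a little short-term relevance to hunt for the counterexample that would falsify the model’s picture of me.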

The statistical models that make up the filter bubble write off the outliers. But in human life it’s the outliers who make things interesting and give us inspiration. And it’s the outliers who are the first signs of change.

One of the best critiques of algorithmic prediction comes, remarkably, from the nineteenth-century Russian novelist Fyodor Dostoyevsky, whose Notes from Underground was a passionate critique of the utopian scientific rationalism of the day. Dostoyevsky looked at the regimented, ordered human life that science promised and predicted a banal future. “All human actions,” the novel’s unnamed narrator grumbles, “will then, of course, be tabulated according to these laws, mathematically, like tables of logarithms up to 108,000, and entered in an index… in which everything will be so clearly calculated and explained that there will be no more incidents or adventures in the world.”

The world often follows predictable rules and falls into predictable patterns: Tides rise and fall, eclipses approach and pass; even the weather is more and more predictable. But when this way of thinking is applied to human behavior, it can be dangerous, for the simple reason that our best moments are often the most unpredictable ones. An entirely predictable life isn’t worth living. But algorithmic induction can lead to a kind of information determinism, in which our past clickstreams entirely decide our future. If we don’t erase our Web histories, in other words, we may be doomed to repeat them.

5

The Public Is Irrelevant

The presence of others who see what we see and hear what we hear assures us of the reality of the world and ourselves.

—Hannah Arendt

It is an axiom of political science in the United States that the only way to neutralize the influence of the newspapers is to multiply their number.

—Alexis de Tocqueville

On the night of May 7, 1999, a B-2 stealth bomber left Whiteman Air Force Base in Missouri. The aircraft flew on an easterly course until it reached the city of Belgrade in Serbia, where a civil war was under way. Around midnight local time, the bomber delivered its cargo: four GPS-guided bombs, into which had been programmed an address that CIA documents identified as a possible arms warehouse. In fact, the address was that of the Chinese embassy in Belgrade. The building was demolished, and three Chinese journalists were killed.

The United States immediately apologized, calling the event an accident. On Chinese state TV, however, an official statement called the bombing a “barbaric attack and a gross violation of Chinese sovereignty.” Though President Bill Clinton tried to reach Chinese President Jiang Zemin, Jiang repeatedly rejected his calls; Clinton’s
