There are a number of ways this can happen, and you can get a picture of them from a few famous psychology experiments into the phenomenon.
In one, subjects were read a list of male and female names, in equal number, and then asked at the end whether there were more men or women in the list: when the men in the list had names like Ronald Reagan, but the women were unheard of, people tended to answer that there were more men than women; and vice versa.
Our attention is always drawn to the exceptional and the interesting, and if you have something to sell, it makes sense to guide people’s attention to the features you most want them to notice. When fruit machines pay up, they make a theatrical ‘kerchunk-kerchunk’ sound with every coin they spit out, so that everybody in the pub can hear it; but when you lose, they don’t draw attention to themselves. Lottery companies, similarly, do their absolute best to get their winners prominently into the media; but it goes without saying that you, as a lottery loser, have never had your outcome paraded for the TV cameras.
The anecdotal success stories about CAM – and the tragic anecdotes about the MMR vaccine – are disproportionately misleading, not just because the statistical context is missing, but because of their ‘high availability’: they are dramatic, associated with strong emotion, and amenable to strong visual imagery. They are concrete and memorable, rather than abstract. No matter what you do with statistics about risk or recovery, your numbers will always have inherently low psychological availability, unlike miracle cures, scare stories, and distressed parents.
It’s because of ‘availability’, and our vulnerability to drama, that people are more afraid of sharks at the beach, or of fairground rides on the pier, than they are of flying to Florida, or driving to the coast. This phenomenon is even demonstrated in patterns of smoking cessation amongst doctors: you’d imagine, since they are rational actors, that all doctors would simultaneously have seen sense and stopped smoking once they’d read the studies showing the phenomenally compelling relationship between cigarettes and lung cancer. These are men of applied science, after all, who are able, every day, to translate cold statistics into meaningful information and beating human hearts.
But in fact, from the start, doctors working in specialities like chest medicine and oncology – where they witnessed patients dying of lung cancer with their own eyes – were proportionately more likely to give up cigarettes than their colleagues in other specialities. Exposure to the emotional immediacy and drama of the consequences, it seems, is what made the difference.
Social influences
Last in our whistle-stop tour of irrationality comes our most self-evident flaw. It feels almost too obvious to mention, but our values are socially reinforced by conformity and by the company we keep. We are selectively exposed to information that revalidates our beliefs, partly because we expose ourselves to situations where those beliefs are inevitably reinforced.
It’s easy to forget the phenomenal impact of conformity. You doubtless think of yourself as a fairly independent-minded person, and you know what you think. I would suggest that the subjects of Asch’s experiments into social conformity believed exactly the same of themselves. These subjects were placed near one end of a line of actors who presented themselves as fellow experimental subjects, but were actually in cahoots with the experimenters. Cards were held up with one line marked on them, and then another card was held up with three lines of different lengths: six inches, eight inches, ten inches.
Everyone called out in turn which line on the second card was the same length as the line on the first. For six of the eighteen pairs of cards the accomplices gave the correct answer; but for the other twelve they called out the wrong answer. In all but a quarter of the cases, the experimental subjects went along with the incorrect answer from the crowd of accomplices on one or more occasions, defying the clear evidence of their own senses.
That’s an extreme example of conformity, but the phenomenon is all around us. ‘Communal reinforcement’ is the process by which a claim becomes a strong belief, through repeated assertion by members of a community. The process is independent of whether the claim has been properly researched, or is supported by empirical data significant enough to warrant belief by reasonable people.
Communal reinforcement goes a long way towards explaining how religious beliefs can be passed on in communities from generation to generation. It also explains how testimonials within communities of therapists, psychologists, celebrities, theologians, politicians, talk-show hosts, and so on, can supplant and become more powerful than scientific evidence.
When people learn no tools of judgement and merely follow their hopes, the seeds of political manipulation are sown.
There are many other well-researched areas of bias. We have a disproportionately high opinion of ourselves, which is nice. A large majority of the public think they are more fair-minded, less prejudiced, more intelligent and more skilled at driving than the average person, when of course only half of us can be better than the median. Most of us exhibit something called ‘attributional bias’: we believe our successes are due to our own internal faculties, and our failures are due to external factors; whereas for others, we believe their successes are due to luck, and their failures to their own flaws. We can’t all be right.
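The point about the median is easy to see with numbers. Here is a minimal sketch in Python, using entirely made-up "driving errors per year" figures: with a skewed distribution, most people genuinely can be better than the average (the mean), because a few terrible drivers drag the mean up; but never more than half can beat the median.

```python
from statistics import mean, median

# Hypothetical "driving errors per year" for ten drivers.
# A few very bad drivers pull the mean well above typical values.
errors = [1, 1, 2, 2, 2, 3, 3, 20, 30, 40]

# Count how many drivers are "better than average" under each measure
# (fewer errors = better).
better_than_mean = sum(e < mean(errors) for e in errors)
better_than_median = sum(e < median(errors) for e in errors)

print(mean(errors))         # 10.4 — the mean is dragged up by the worst drivers
print(median(errors))       # 2.5 — the middle of the distribution
print(better_than_mean)     # 7 — most drivers beat the mean
print(better_than_median)   # 5 — but at most half can beat the median
```

So "most people are better than average drivers" can literally be true of the mean, which is why the claim about the median is the interesting one.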
Lastly, we use context and expectation to bias our appreciation of a situation – because, in fact, that’s the only way we can think. Artificial intelligence research has drawn a blank so far largely because of something called the ‘frame problem’: you can tell a computer how to process information, and give it all the information in the world, but as soon as you give it a real-world problem – a sentence to understand and respond to, for example – computers perform much worse than we might expect, because they don’t know what information is relevant to the problem. This is something humans are very good at – filtering irrelevant information – but that skill comes at the cost of ascribing disproportionate significance to some contextual information.
We tend to assume, for example, that positive characteristics cluster: people who are attractive must also be good; people who seem kind might also be intelligent and well-informed. Even this has been demonstrated experimentally: identical essays in neat handwriting score higher than messy ones; and the behaviour of sporting teams which wear black is rated as more aggressive and unfair than teams which wear white.
And no matter how hard you try, sometimes things just are very counterintuitive, especially in science. Imagine there are twenty-three people in a room. What is the chance that two of them celebrate their birthday on the same date? One in two.
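That figure is easy to check for yourself. A quick sketch in Python: the chance that at least two of n people share a birthday is one minus the chance that all n birthdays differ, assuming 365 equally likely dates and ignoring leap years.

```python
def shared_birthday_prob(n, days=365):
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for i in range(n):
        # The (i+1)-th person must avoid the i dates already taken.
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct

print(round(shared_birthday_prob(23), 3))  # 0.507 — just over one in two
```

With twenty-two people the probability dips just below a half, which is why twenty-three is the number always quoted.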
When it comes to thinking about the world around you, you have a range of tools available. Intuitions are valuable for all kinds of things, especially in the social domain: deciding if your girlfriend is cheating on you, perhaps, or whether a business partner is trustworthy. But for mathematical issues, or assessing causal relationships, intuitions are often completely wrong, because they rely on shortcuts which have arisen as handy ways to solve complex cognitive problems rapidly, but at a cost of inaccuracies, misfires and oversensitivity.
It’s not safe to let our intuitions and prejudices run unchecked and unexamined: it’s in our interest to challenge these flaws in intuitive reasoning wherever we can, and the methods of science and statistics grew up specifically in opposition to these flaws. Their thoughtful application is our best weapon against these pitfalls, and the challenge, perhaps, is to work out which tools to use where. Because trying to be ‘scientific’ about your relationship with your partner is as stupid as following your intuitions about causality.
Now let’s see how journalists deal with stats.
I’d be genuinely intrigued to know how long it would take you, from where you are sitting right now, to find someone who can tell you the difference between ‘median’, ‘mean’ and ‘mode’.
If it helps to make this feel a bit more plausible, bear in mind that you only need any two of the twenty-three people to share a birthday, not a match with one particular person or date.
Bad Stats