This is a paraphrase, but it’s not entirely inaccurate.
In some ways, perhaps it shouldn’t have been a surprise. The Germans had identified a rise in lung cancer in the 1920s, but suggested – quite reasonably – that it might be related to poison-gas exposure in the Great War. In the 1930s, identifying toxic threats in the environment became an important feature of the Nazi project to build a master race through ‘racial hygiene’.
Two researchers, Schairer and Schöniger, published their own case-control study in 1943, demonstrating a relationship between smoking and lung cancer almost a decade before any researchers elsewhere. Their paper wasn’t mentioned in the classic Doll and Bradford Hill paper of 1950, and if you check in the Science Citation Index, it was referred to only four times in the 1960s, once in the 1970s, and then not again until 1988, despite providing valuable information. Some might argue that this shows the danger of dismissing sources you dislike. But Nazi scientific and medical research was bound up with the horrors of cold-blooded mass murder, and the strange puritanical ideologies of Nazism. It was almost universally disregarded, and with good reason. Doctors had been active participants in the Nazi project, and joined Hitler’s National Socialist Party in greater numbers than any other profession (45 per cent of them were party members, compared with 20 per cent of teachers).
German scientists involved in the smoking project included racial theorists, but also researchers interested in the heritability of frailties created by tobacco, and the question of whether people could be rendered ‘degenerate’ by their environment. Research on smoking was directed by Karl Astel, who helped to organise the ‘euthanasia’ operation that murdered 200,000 mentally and physically disabled people, and assisted in the ‘final solution of the Jewish question’ as head of the Office of Racial Affairs.
I cheerfully admit to borrowing these examples from the fabulous Professor Lewis Wolpert.
Why Clever People Believe Stupid Things
The real purpose of the scientific method is to make sure nature hasn’t misled you into thinking you know something you actually don’t know.
Robert Pirsig, Zen and the Art of Motorcycle Maintenance
Why do we have statistics, why do we measure things, and why do we count? If the scientific method has any authority – or as I prefer to think of it, ‘value’ – it is because it represents a systematic approach; but this is valuable only because the alternatives can be misleading. When we reason informally – call it intuition, if you like – we use rules of thumb which simplify problems for the sake of efficiency. Many of these shortcuts have been well characterised in a field called ‘heuristics’, and they are efficient ways of knowing in many circumstances.
This convenience comes at a cost – false beliefs – because there are systematic vulnerabilities in these truth-checking strategies which can be exploited. This is not dissimilar to the way that paintings can exploit shortcuts in our perceptual system: as objects become more distant, they appear smaller, and ‘perspective’ can trick us into seeing three dimensions where there are only two, by taking advantage of this strategy used by our depth-checking apparatus. When our cognitive system – our truth-checking apparatus – is fooled, then, much like seeing depth in a flat painting, we come to erroneous conclusions about abstract things. We might misidentify normal fluctuations as meaningful patterns, for example, or ascribe causality where in fact there is none.
These are cognitive illusions, a parallel to optical illusions. They can be just as mind-boggling, and they cut to the core of why we do science, rather than basing our beliefs on intuition informed by a ‘gist’ of a subject acquired through popular media: because the world does not provide you with neatly tabulated data on interventions and outcomes. Instead it gives you random, piecemeal data in dribs and drabs over time, and trying to construct a broad understanding of the world from a memory of your own experiences would be like looking at the ceiling of the Sistine Chapel through a long, thin cardboard tube: you can try to remember the individual portions you’ve spotted here and there, but without a system and a model, you’re never going to appreciate the whole picture.
Let’s begin.
Randomness
As human beings, we have an innate ability to make something out of nothing. We see shapes in the clouds, and a man in the moon; gamblers are convinced that they have ‘runs of luck’; we take a perfectly cheerful heavy-metal record, play it backwards, and hear hidden messages about Satan. Our ability to spot patterns is what allows us to make sense of the world; but sometimes, in our eagerness, we are oversensitive, trigger-happy, and mistakenly spot patterns where none exist.
In science, if you want to study a phenomenon, it is sometimes useful to reduce it to its simplest and most controlled form. There is a prevalent belief among sporting types that sportsmen, like gamblers (except more plausibly), have ‘runs of luck’. People ascribe this to confidence, ‘getting your eye in’, ‘warming up’, or more, and while it might exist in some games, statisticians have looked in various places where people have claimed it to exist and found no relationship between, say, hitting a home run in one inning, then hitting a home run in the next.
Because the ‘winning streak’ is such a prevalent belief, it is an excellent model for looking at how we perceive random sequences of events. This was used by an American social psychologist called Thomas Gilovich in a classic experiment. He took basketball fans and showed them a random sequence of X’s and O’s, explaining that they represented a player’s hits and misses, and then asked them if they thought the sequences demonstrated ‘streak shooting’.
Here is a random sequence of figures from that experiment. You might think of it as being generated by a series of coin tosses.
OXXXOXXXOXXOOOXOOXXOO
The subjects in the experiment were convinced that this sequence exemplified ‘streak shooting’ or ‘runs of luck’, and it’s easy to see why, if you look again: six of the first eight shots were hits. No, wait: eight of the first eleven shots were hits. No way is that random …
What this ingenious experiment shows is how bad we are at correctly identifying random sequences. We are wrong about what they should look like: we expect too much alternation, so truly random sequences seem somehow too lumpy and ordered. Our intuitions about the most basic observation of all – distinguishing a pattern from mere random background noise – are deeply flawed.
This is our first lesson in the importance of using statistics instead of intuition. It’s also an excellent demonstration of how strong the parallels are between these cognitive illusions and the perceptual illusions with which we are more familiar. You can stare at a visual illusion all you like, talk or think about it, but it will still look ‘wrong’. Similarly, you can look at that random sequence above as hard as you like: it will still look lumpy and ordered, in defiance of what you now know.
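You can make this concrete with a few lines of code. The sketch below (in Python; the counting helper and the simulation size are my own illustration, not part of the original experiment) checks how often the sequence above actually alternates between hit and miss, and then simulates fair coin tosses of the same length to see how often a run of four or more identical outcomes turns up.

import random

# The sequence shown above: X = hit, O = miss
seq = "OXXXOXXXOXXOOOXOOXXOO"

# Proportion of adjacent pairs that alternate (hit to miss, or miss to hit).
# A fair coin alternates half the time on average; a genuinely 'streaky'
# shooter would alternate less often than that.
alternations = sum(a != b for a, b in zip(seq, seq[1:]))
print(f"alternation rate: {alternations}/{len(seq) - 1} = {alternations / (len(seq) - 1):.2f}")

def longest_run(s):
    # Length of the longest run of identical characters in s.
    best = run = 1
    for a, b in zip(s, s[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

# How often does a purely random sequence of the same length contain
# a run of four or more identical outcomes?
trials = 100_000
with_runs = sum(
    longest_run("".join(random.choice("XO") for _ in range(len(seq)))) >= 4
    for _ in range(trials)
)
print(f"random sequences containing a run of 4+: {with_runs / trials:.0%}")

The sequence above alternates on exactly half of its adjacent pairs, which is just what a fair coin does on average; and the simulation shows that runs of four or more identical outcomes appear in most random sequences of this length. The ‘streaks’ are simply what randomness looks like.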
Regression to the mean
We have already looked at regression to the mean in our section on homeopathy: it is the phenomenon whereby, when things are at their extremes, they are likely to settle back down to the middle, or ‘regress to the mean’.
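A short simulation makes the effect concrete. This is a sketch in Python, with the scenario and all the numbers invented purely for illustration: everyone has the same underlying condition, each measurement adds random day-to-day noise, and we pick out the people who looked worst on the first day and simply measure them again.

import random

random.seed(1)

N = 10_000
TRUE_SEVERITY = 50     # everyone's underlying condition, which never changes
NOISE = 15             # random day-to-day variation in how bad they feel

def measure():
    # One noisy measurement of the same underlying condition.
    return TRUE_SEVERITY + random.gauss(0, NOISE)

day1 = [measure() for _ in range(N)]
day2 = [measure() for _ in range(N)]

# Select the 10 per cent of people who looked worst on day one ...
worst = sorted(range(N), key=lambda i: day1[i], reverse=True)[: N // 10]

# ... and see how the same people score on day two, with no treatment at all.
mean_day1 = sum(day1[i] for i in worst) / len(worst)
mean_day2 = sum(day2[i] for i in worst) / len(worst)
print(f"worst 10% on day one: {mean_day1:.1f}")
print(f"same people on day two: {mean_day2:.1f}")

The extreme group looks markedly better the second time around, not because anything has changed, but because the first measurement caught their random noise at its worst. Any remedy taken in between would appear to have worked.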
We saw this with reference to the
There are two discrete things happening when we fall prey to this failure of intuition. Firstly, we have failed to correctly spot the pattern of regression to the mean. Secondly, crucially, we have then decided that something must have