To an extent, when we discussed the subject earlier I relied on your good will, and on the likelihood that from your own experience you could agree that this explanation made sense. But the effect has also been demonstrated in another ingeniously pared-down experiment, in which all the variables were controlled, and yet people still saw a pattern, and causality, where there was none.

The subjects in the experiment played the role of a teacher trying to make a child arrive punctually at school for 8.30 a.m. They sat at a computer on which it appeared that each day, for fifteen consecutive days, the supposed child would arrive some time between 8.20 and 8.40; but unbeknownst to the subjects, the arrival times were entirely random, and predetermined before the experiment began. Nonetheless, the subjects were all allowed to use punishments for lateness, and rewards for punctuality, in whatever permutation they wished. When they were asked at the end to rate their strategy, 70 per cent concluded that reprimand was more effective than reward in producing punctuality from the child.

These subjects were convinced that their intervention had an effect on the punctuality of the child, despite the child’s arrival time being entirely random, and exemplifying nothing more than ‘regression to the mean’. By the same token, when homeopathy has been shown to elicit no more improvement than placebo, people are still convinced that it has a beneficial effect on their health.
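The statistical trap the subjects fell into can be made concrete with a small simulation (a sketch, not the original experiment's code; the fifteen-day window is from the study, but the reprimand rule and trial count are my assumptions). Generate purely random arrival times, have the 'teacher' reprimand any arrival after 8.30, and count how often the next day's arrival is earlier:

```python
import random

random.seed(1)

def simulate(days=15, trials=10000):
    """Count how often a purely random arrival time 'improves' on the
    day after an unusually late one -- with no intervention at all."""
    improved = same_or_worse = 0
    for _ in range(trials):
        # Arrival times in minutes past 8.20, uniform over 8.20-8.40.
        times = [random.uniform(0, 20) for _ in range(days)]
        for today, tomorrow in zip(times, times[1:]):
            if today > 10:  # later than 8.30: the 'teacher' reprimands
                if tomorrow < today:
                    improved += 1
                else:
                    same_or_worse += 1
    return improved / (improved + same_or_worse)

print(f"After a late day, the next day is earlier {simulate():.0%} of the time")
```

Because a late day is, by definition, later than average, the next random draw will usually be earlier: in this sketch, reprimand is 'followed by improvement' roughly three-quarters of the time, purely by chance. That is regression to the mean wearing the costume of an effective punishment.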

To recap:

We see patterns where there is only random noise.

We see causal relationships where there are none.

These are two very good reasons to measure things formally. It’s bad news for intuition already. Can it get much worse?

The bias towards positive evidence

It is the peculiar and perpetual error of the human understanding to be more moved and excited by affirmatives than negatives.

Francis Bacon

It gets worse. It seems we have an innate tendency to seek out and overvalue evidence that confirms a given hypothesis. To try to remove this phenomenon from the controversial arena of CAM – or the MMR scare, which is where this is headed – we are lucky to have more pared-down experiments which illustrate the general point.

Imagine a table with four cards on it, marked ‘A’, ‘B’, ‘2’ and ‘3’. Each card has a letter on one side, and a number on the other. Your task is to determine whether all cards with a vowel on one side have an even number on the other. Which two cards would you turn over? Everybody chooses the ‘A’ card, obviously, but like many people – unless you really forced yourself to think hard about it – you would probably choose to turn over the ‘2’ card as well. That’s because these are the cards which would produce information consistent with the hypothesis you are supposed to be testing. But in fact, the cards you need to flip are the ‘A’ and the ‘3’: finding a vowel on the back of the ‘2’ would tell you nothing about ‘all cards’, it would just confirm ‘some cards’, whereas finding a vowel on the back of ‘3’ would comprehensively disprove your hypothesis. This modest brainteaser demonstrates our tendency, in our unchecked intuitive reasoning style, to seek out information that confirms a hypothesis: and it demonstrates the phenomenon in a value-neutral situation.
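The logic of the puzzle can be checked exhaustively in a few lines of code (a sketch of my own, not part of the original study): a card is worth flipping only if something on its hidden face could falsify the rule 'every vowel card has an even number on the back'.

```python
VOWELS = set("AEIOU")

def is_even_digit(ch):
    """True for the characters '0', '2', '4', '6', '8'."""
    return ch.isdigit() and int(ch) % 2 == 0

def worth_flipping(visible):
    """A card is worth flipping only if some possible hidden face
    could falsify 'every vowel card has an even number on the back'."""
    if visible.isalpha():
        hidden_faces = "0123456789"                   # letter showing: a digit is hidden
    else:
        hidden_faces = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"   # digit showing: a letter is hidden
    for hidden in hidden_faces:
        letter, digit = (visible, hidden) if visible.isalpha() else (hidden, visible)
        if letter in VOWELS and not is_even_digit(digit):
            return True   # a counterexample is possible: flipping is informative
    return False

for card in ["A", "B", "2", "3"]:
    print(card, worth_flipping(card))
```

Only ‘A’ and ‘3’ come back as informative: whatever sits behind ‘B’ or ‘2’, the rule survives, so flipping them can only produce the comfortable glow of confirmation.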

This same bias in seeking out confirmatory information has been demonstrated in more sophisticated social psychology experiments. When trying to determine if someone is an ‘extrovert’, for example, many subjects will ask questions for which a positive answer would confirm the hypothesis (‘Do you like going to parties?’) rather than refute it.

We show a similar bias when we interrogate information from our own memory. In one experiment, subjects read a vignette about a woman who exemplified various introverted and extroverted behaviours, and were then divided into two groups. One group was asked to consider her suitability for a job as a librarian, while the other was asked to consider her suitability for a job as an estate agent. Both groups were asked to come up with examples of both her extroversion and her introversion. The group considering her for the librarian job recalled more examples of introverted behaviour, while the group considering her for a job selling real estate cited more examples of extroverted behaviour.

This tendency is dangerous, because if you only ask questions that confirm your hypothesis, you will be more likely to elicit information that confirms it, giving a spurious sense of confirmation. It also means – thinking more broadly – that the people who pose the questions already have a head start in popular discourse.

So we can add to our running list of cognitive illusions, biases and failings of intuition:

We overvalue confirmatory information for any given hypothesis.

We seek out confirmatory information for any given hypothesis.

Biased by our prior beliefs

[I] followed a golden rule, whenever a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favourable ones.

Charles Darwin

This is the reasoning flaw that everybody does know about, and even if it’s the least interesting cognitive illusion – because it’s an obvious one – it has been demonstrated in experiments which are so close to the bone that you may find them, as I do, quite unnerving.

The classic demonstration of people being biased by their prior beliefs comes from a study looking at beliefs about the death penalty. A large number of proponents and opponents of state executions were recruited. They were all shown two pieces of evidence on the deterrent effect of capital punishment: one supporting a deterrent effect, the other providing evidence against it.

The evidence they were shown was as follows:

A comparison of murder rates in one US state before the death penalty was brought in, and after.

A comparison of murder rates in different states, some with, and some without, the death penalty.

But there was a very clever twist. The proponents and opponents of capital punishment were each further divided into two smaller groups. So, overall, half of the proponents and opponents of capital punishment had their opinion reinforced by before/after data, but challenged by state/state data, and vice versa.

Asked about the evidence, the subjects confidently uncovered flaws in the methods of the research that went against their pre-existing view, but downplayed the flaws in the research that supported their view. Half the proponents of capital punishment, for example, picked holes in the idea of state/state comparison data, on methodological grounds, because that was the data that went against their view, while they were happy with the before/after data; but the other half of the proponents of capital punishment rubbished the before/after data, because in their case they had been exposed to before/after data which challenged their view, and state/state data which supported it.

Put simply, the subjects’ faith in research data was not predicated on an objective appraisal of the research methodology, but on whether the results validated their pre-existing views. This phenomenon reaches its pinnacle in alternative therapists – or scaremongers – who unquestioningly champion anecdotal data, whilst meticulously examining every large, carefully conducted study on the same subject for any small chink that would permit them to dismiss it entirely.

This, once again, is why it is so important that we have clear strategies available to us to appraise evidence, regardless of its conclusions, and this is the major strength of science. In a systematic review of the scientific literature, investigators will sometimes mark the quality of the ‘methods’ section of a study blindly – that is, without looking at the ‘results’ section – so that it cannot bias their appraisal. Similarly, in medical research there is a hierarchy of evidence: a well performed trial is more significant than survey data in most contexts, and so on.

So we can add to our list of new insights about the flaws in intuition:

Our assessment of the quality of new evidence is biased by our previous beliefs.

Availability

We spend our lives spotting patterns, and picking out the exceptional and interesting things. You don’t waste cognitive effort, every time you walk into your house, noticing and analysing all the many features in the visually dense environment of your kitchen. You do notice the broken window and the missing telly.

When information is made more ‘available’, as psychologists call it, it becomes disproportionately prominent.
