wife in the 1990s) was not a criminal. Look, the other day I had breakfast with him and he didn’t kill anybody. I am serious, I did not see him kill a single person. Wouldn’t that confirm his innocence? If I said such a thing you would certainly call a shrink, an ambulance, or perhaps even the police, since you might think that I spent too much time in trading rooms or in cafes thinking about this Black Swan topic, and that my logic may represent such an immediate danger to society that I myself need to be locked up immediately.

You would have the same reaction if I told you that I took a nap the other day on the railroad track in New Rochelle, New York, and was not killed. Hey, look at me, I am alive, I would say, and that is evidence that lying on train tracks is risk-free. Yet consider the following. Look again at Figure 1 in Chapter 4; someone who observed the turkey’s first thousand days (but not the shock of the thousand and first) would tell you, and rightly so, that there is no evidence of the possibility of large events, i.e., Black Swans. You are likely to confuse that statement, however, particularly if you do not pay close attention, with the statement that there is evidence of no possible Black Swans. Even though it is in fact vast, the logical distance between the two assertions will seem very narrow in your mind, so that one can be easily substituted for the other. Ten days from now, if you manage to remember the first statement at all, you will be likely to retain the second, inaccurate version – that there is proof of no Black Swans. I call this confusion the round-trip fallacy, since these statements are not interchangeable.

Such confusion of the two statements partakes of a trivial, very trivial (but crucial) logical error – but we are not immune to trivial logical errors, nor are professors and thinkers particularly immune to them (complicated equations do not tend to cohabit happily with clarity of mind). Unless we concentrate very hard, we are likely to unwittingly simplify the problem because our minds routinely do so without our knowing it.

It is worth a deeper examination here.

Many people confuse the statement “almost all terrorists are Moslems” with “almost all Moslems are terrorists”. Assume that the first statement is true, that 99 percent of terrorists are Moslems. This would mean that only about .001 percent of Moslems are terrorists, since there are more than one billion Moslems and only, say, ten thousand terrorists, one in a hundred thousand. So the logical mistake makes you (unconsciously) overestimate the odds of a randomly drawn individual Moslem person (between the ages of, say, fifteen and fifty) being a terrorist by close to fifty thousand times!
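To make the arithmetic concrete, here is a minimal sketch (in Python; the code is an addition for illustration, not part of the original argument). The terrorist and population counts are the text’s own illustrative figures; the share of Moslems in the fifteen-to-fifty age band is an assumed 50 percent, included only so the “fifty thousand times” figure can be reproduced.

```python
# A back-of-the-envelope check of the base-rate arithmetic above.
terrorists = 10_000                      # "say, ten thousand terrorists"
terrorist_moslems = 0.99 * terrorists    # "99 percent of terrorists are Moslems"
moslems_total = 1_000_000_000            # "more than one billion Moslems"
share_aged_15_to_50 = 0.5                # assumption, not a figure from the text

moslems_15_to_50 = moslems_total * share_aged_15_to_50
p_terrorist_given_moslem = terrorist_moslems / moslems_15_to_50   # roughly 0.002 percent
p_moslem_given_terrorist = 0.99                                   # 99 percent

print(f"P(terrorist | Moslem aged 15-50) ≈ {p_terrorist_given_moslem:.4%}")
print(f"Confusing the two conditionals overestimates the odds by ≈ "
      f"{p_moslem_given_terrorist / p_terrorist_given_moslem:,.0f} times")
```

With these assumptions the base rate comes out near 0.002 percent, and the ratio of the two conditional probabilities lands close to the fifty-thousandfold overestimate mentioned above.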

The reader might see in this round-trip fallacy the unfairness of stereotypes – minorities in urban areas in the United States have suffered from the same confusion: even if most criminals come from their ethnic subgroup, most of their ethnic subgroup are not criminals, but they still suffer from discrimination by people who should know better.

“I never meant to say that the Conservatives are generally stupid. I meant to say that stupid people are generally Conservative”, John Stuart Mill once complained. This problem is chronic: if you tell people that the key to success is not always skills, they think that you are telling them that it is never skills, always luck.

Our inferential machinery, that which we use in daily life, is not made for a complicated environment in which a statement changes markedly when its wording is slightly modified. Consider that in a primitive environment there is no consequential difference between the statements “most killers are wild animals” and “most wild animals are killers”. There is an error here, but it is almost inconsequential. Our statistical intuitions have not evolved for a habitat in which these subtleties can make a big difference.

Zoogles Are Not All Boogies

All zoogles are boogies. You saw a boogie. Is it a zoogle? Not necessarily, since not all boogies are zoogles; adolescents who make a mistake in answering this kind of question on their SAT test might not make it to college. Yet another person can get very high scores on the SATs and still feel a chill of fear when someone from the wrong side of town steps into the elevator. This inability to automatically transfer knowledge and sophistication from one situation to another, or from theory to practice, is a quite disturbing attribute of human nature.

Let us call it the domain specificity of our reactions. By domain-specific I mean that our reactions, our mode of thinking, our intuitions, depend on the context in which the matter is presented, what evolutionary psychologists call the “domain” of the object or the event. The classroom is a domain; real life is another. We react to a piece of information not on its logical merit, but on the basis of which framework surrounds it, and how it registers with our social-emotional system. Logical problems approached one way in the classroom might be treated differently in daily life. Indeed they are treated differently in daily life.

Knowledge, even when it is exact, does not often lead to appropriate actions because we tend to forget what we know, or forget how to process it properly if we do not pay attention, even when we are experts. Statisticians, it has been shown, tend to leave their brains in the classroom and engage in the most trivial inferential errors once they are let out on the streets. In 1971, the psychologists Danny Kahneman and Amos Tversky plied professors of statistics with statistical questions not phrased as statistical questions. One was similar to the following (changing the example for clarity): Assume that you live in a town with two hospitals – one large, the other small. On a given day 60 percent of those born in one of the two hospitals are boys. Which hospital is it likely to be? Many statisticians made the equivalent of the mistake (during a casual conversation) of choosing the larger hospital, when in fact the very basis of statistics is that large samples are more stable and should fluctuate less from the long-term average – here, 50 percent for each of the sexes – than smaller samples. These statisticians would have flunked their own exams. During my days as a quant I counted hundreds of such severe inferential mistakes made by statisticians who forgot that they were statisticians.
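To see why the small hospital is the right answer, one can model each birth as a fair coin toss and compute the chance that at least 60 percent of a day’s births are boys. The sketch below (Python) is a hedged illustration only: the daily birth counts of 15 and 45 are assumptions chosen for the example, not figures from the original question.

```python
from math import comb

def p_at_least_60_percent_boys(n_births: int, p_boy: float = 0.5) -> float:
    """Binomial probability that at least 60% of n_births are boys."""
    threshold = -(-3 * n_births // 5)  # ceil(0.6 * n_births), exact integer arithmetic
    return sum(comb(n_births, k) * p_boy**k * (1 - p_boy)**(n_births - k)
               for k in range(threshold, n_births + 1))

# Daily birth counts are illustrative assumptions, not from the original question.
for name, births in [("small hospital", 15), ("large hospital", 45)]:
    print(f"{name} ({births} births/day): "
          f"P(>= 60% boys) ≈ {p_at_least_60_percent_boys(births):.2f}")
```

With these assumed numbers the small hospital shows a 60-percent-boy day roughly three times in ten, the large one only about once in eight or nine – exactly the sample-size effect the statisticians forgot in conversation.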

For another illustration of the way we can be ludicrously domain-specific in daily life, go to the luxury Reebok Sports Club in New York City, and look at the number of people who, after riding the escalator for a couple of floors, head directly to the StairMasters.

This domain specificity of our inferences and reactions works both ways: some problems we can understand in their applications but not in textbooks; others we are better at capturing in the textbook than in the practical application. People can manage to effortlessly solve a problem in a social situation but struggle when it is presented as an abstract logical problem. We tend to use different mental machinery – so-called modules – in different situations: our brain lacks a central all-purpose computer that starts with logical rules and applies them equally to all possible situations.

And as I’ve said, we can commit a logical mistake in reality but not in the classroom. This asymmetry is best visible in cancer detection. Take doctors examining a patient for signs of cancer; tests are typically done on patients who want to know if they are cured or if there is “recurrence”. (In fact, recurrence is a misnomer; it simply means that the treatment did not kill all the cancerous cells and that these undetected malignant cells have started to multiply out of control.) It is not feasible, in the present state of technology, to examine every single one of the patient’s cells to see if all of them are nonmalignant, so the doctor takes a sample by scanning the body with as much precision as possible. Then she makes an assumption about what she did not see. I was once taken aback when a doctor told me after a routine cancer checkup, “Stop worrying, we have evidence of cure”. “Why?” I asked. “There is evidence of no cancer” was the reply. “How do you know?” I asked. He replied, “The scan is negative”. Yet he went around calling himself doctor!

An acronym used in the medical literature is NED, which stands for No Evidence of Disease. There is no such thing as END, Evidence of No Disease. Yet my experience discussing this matter with plenty of doctors, even those who publish papers on their results, is that many slip into the round-trip fallacy during conversation.

Doctors in the midst of the scientific arrogance of the 1960s looked down at mothers’ milk as something primitive, as if it could be replicated by their laboratories – not realizing that mothers’ milk might include useful components that could have eluded their scientific understanding – a simple confusion of absence of evidence of the benefits of mothers’ milk with evidence of absence of the benefits (another case of Platonicity as “it did not make sense” to breast-feed when we could simply use bottles). Many people paid the price for this naive inference: those who were not breast-fed as infants turned out to be at an increased risk of a collection of health problems, including a higher likelihood of developing certain types of cancer – there had to be in mothers’ milk some necessary nutrients that still elude us. Furthermore, benefits to mothers who breast-feed were also neglected, such as a reduction in the risk of breast cancer.

Likewise with tonsils: the removal of tonsils may lead to a higher incidence of throat cancer, but for decades
