Nobody has trouble with this form of logic; we understand the abstract form and realize that it generalizes freely:

All glorks are frum.

Skeezer is a glork.

Therefore, Skeezer is frum.

Presto — a new way of forming beliefs: take what you know (the minor and major premises), insert it into the inferential schema (all X’s are Y, Q is an X, therefore Q is a Y), and deduce new beliefs. The beauty of the scheme is the way in which true premises are guaranteed, by the rules of logic, to lead to true conclusions.

The good news is that humans can do this sort of thing at all; the bad news is that, without a lot of training, we don’t do it particularly well. If the capacity to reason logically is the product of natural selection, it is also a very recent adaptation with some serious bugs yet to be worked out.

Consider, for example, this syllogism, which has a slight but important difference from the previous one:

All living things need water.

Roses need water.

Therefore, roses are living things.

Is this a valid argument? Focus on the logic, not the conclusion per se; we already know that roses are living things. The question is whether the argument is valid, whether the conclusion follows from the premises like the night follows the day. Most people think the argument is solid. But look carefully: the statement that all living things need water doesn’t preclude the possibility that some nonliving things might need water too. My car’s battery, for instance.

The poor logic of the argument becomes clearer if I simply change the words in question:

Premise 1: All insects need oxygen.

Premise 2: Mice need oxygen.

Conclusion: Therefore, mice are insects.

A creature truly noble in reason ought to see, instantaneously, that the rose and mouse arguments follow exactly the same formal structure (all X’s need Y, Z’s need Y, therefore Z’s are X’s) and ought to instantly reject all such reasoning as fallacious. But most of us need to see the two syllogisms side by side in order to get it. All too often we suspend a careful analysis of what is logical in favor of prior beliefs.
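The fallacy in that shared schema can be made concrete by modeling categories as sets, a minimal sketch using the examples above (the particular set members here are purely illustrative):

```python
# "All X's need Y" means the set X is a subset of the set of things
# that need Y; crucially, it says nothing about what ELSE needs Y.
oxygen_needers = {"ant", "beetle", "mouse"}
insects = {"ant", "beetle"}
mice = {"mouse"}

# Both premises are true...
assert insects <= oxygen_needers   # All insects need oxygen.
assert mice <= oxygen_needers      # Mice need oxygen.

# ...but the conclusion does not follow: mice are not insects.
# Two sets can share a superset without either containing the other.
assert not (mice <= insects)
```

The same three lines of premises and conclusion fail for roses and living things for exactly the same structural reason; the shared superset does the misleading work.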

What’s going on here? In a system that was superlatively well engineered, belief and the process of drawing inferences (which soon become new beliefs) would be separate, with an iron wall between them; we would be able to distinguish what we had direct evidence for from what we had merely inferred. Instead, in the development of the human mind, evolution took a different path. Long before human beings began to engage in completely explicit, formal forms of logic (like syllogisms), creatures from fish to giraffes were probably making informal inferences, automatically, without a great deal of reflection; if apples are good to eat, pears probably are too. A monkey or a gorilla might make that inference without ever realizing that there is such a thing as an inference. Perhaps one reason people are so apt to confuse what they know with what they have merely inferred is that for our ancestors, the two were scarcely different, with much of inference arising automatically as part of belief, rather than via some separate, reflective system.

The capacity to codify the laws of logic — to recognize that if P then Q; P; therefore Q is valid whereas if P then Q; Q; therefore P is not — presumably evolved only recently, perhaps sometime after the arrival of Homo sapiens. And by that time, belief and inference were already too richly intertwined to allow the two to ever be fully separate in everyday reasoning. The result is very much a kluge: a perfectly sound system of deliberate reasoning, all too often pointlessly clouded by prejudice and prior belief.
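The difference between the valid pattern (modus ponens) and the invalid one (affirming the consequent) can be checked mechanically by enumerating truth values, as in this small sketch (the helper name `implies` is just the standard material conditional, nothing from the text):

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

rows = list(product([True, False], repeat=2))  # all assignments to (P, Q)

# Modus ponens (if P then Q; P; therefore Q): in every row where both
# premises hold, the conclusion Q holds too, so the form is valid.
assert all(q for p, q in rows if implies(p, q) and p)

# Affirming the consequent (if P then Q; Q; therefore P) has a
# counterexample: P false, Q true makes both premises true and the
# conclusion false, so the form is invalid.
counterexamples = [(p, q) for p, q in rows if implies(p, q) and q and not p]
assert counterexamples == [(False, True)]
```

A single counterexample row is all it takes to sink an argument form, which is precisely what the mouse-and-insect syllogism supplies for the rose argument.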

Studies of the brain bear this out: people evaluate syllogisms using two different neural circuits, one more closely associated with logic and spatial reasoning (bilateral parietal), the other more closely associated with prior belief (frontal-temporal). The former (logical and spatial) is effortful; the latter is invoked automatically, which is why getting the logic right is difficult.

In fact, truly explicit reasoning via logic probably isn’t something that evolved, per se, at all. When humans do manage to be rational, in a formal logical sense, it’s not because we are built that way, but because we are clever enough to learn the rules of logic (and to recognize their validity, once explained). While all normal human beings acquire language, the ability to use formal logic to acquire and reason about beliefs may be more of a cultural product than an evolutionary one, something made possible by evolution but not guaranteed by it. Formal reason seems to be present, if at all, primarily in literate cultures but difficult to discern in preliterate ones. The Russian psychologist Alexander Luria, for example, went to the mountains of central Asia in the late 1930s and asked the indigenous people to consider the logic of syllogisms like this one: “In a certain town in Siberia all bears are white. Your neighbor went to that town and he saw a bear. What color was that bear?” His respondents just didn’t get it; a typical response would be, in essence, “How should I know? Why doesn’t the professor go ask the neighbor himself?” Further studies later in the twentieth century essentially confirmed this pattern; people in nonliterate societies generally respond to queries about syllogisms by relying on the facts that they already know, apparently blind to the abstract logical relations that experimenters are inquiring about. This does not mean that people from those societies cannot learn formal logic — in general, at least the children can — but it does show that acquiring an abstract logic is not a natural, automatic phenomenon in the way that acquiring language is. This in turn suggests that formal tools for reasoning about belief are at least as much learned as they are evolved, not (as assumed by proponents of the idea that humanity is innately rational) standard equipment.

Once we decide something is true (for whatever reason), we often make up new reasons for believing it. Consider, for example, a study that I ran some years ago. Half my subjects read a report of a study that showed that good firefighting was correlated with high scores on a measure of risk-taking ability; the other half of the subjects read the opposite: they were told of a study that showed that good firefighting was negatively correlated with risk-taking ability, that is, that risk takers made poor firefighters. Each group was then further subdivided. Some people were asked to reflect on what they read, writing down reasons for why the study they read about might have gotten the results it did; others were simply kept busy with a series of difficult geometrical puzzles like those found on an IQ test.

Then, as social psychologists so often do, I pulled the rug out from under my subjects: “Headline, this news just in — the study you read about in the first part of the experiment was a fraud. The scientists who allegedly studied firefighting actually made their data up! What I’d like to know is what you really think — is firefighting really correlated with risk taking?”

Even after I told them that the original study was complete rubbish, the subjects who got a chance to reflect (and create their own explanations) continued to believe whatever they had initially read. In short, if you give someone half a chance to make up their own reasons to believe something, they’ll take you up on the opportunity and start to believe it — even if their original evidence is thoroughly discredited. Rational man, if he (or she) existed, would only believe what is true, invariably moving from true premises to true conclusions. Irrational man, kluged product of evolution that he (or she) is, frequently moves in the opposite direction, starting with a conclusion and seeking reasons to believe it.

Belief, I would suggest, is stitched together out of three fundamental components: a capacity for memory (beliefs would be of no value if they came and went without any long-term hold on the mind), a capacity for inference (deriving new facts from old, as just discussed), and a capacity for, of all things, perception.

Superficially, one might think of perception and belief as separate. Perception is what we see and hear, taste, smell, or feel, while belief is what we know or think we know. But in terms of evolutionary history, the two are not as different as they initially appear. The surest path to belief is to see something. When my wife’s golden retriever, Ari, wags his tail, I believe him to be happy; mail falls through the slot, and I believe the mail has arrived. Or, as Chico Marx put it, “Who are you gonna believe, me or your own eyes?”
