drive your new car out of the showroom, it will become dull. If you had expected this, you probably would not have bought it.
You are about to commit a prediction error that you have already made. Yet it would cost so little to introspect!
Psychologists have studied this kind of misprediction with respect to both pleasant and unpleasant events. We overestimate the effects of both kinds of future events on our lives. We seem to be in a psychological predicament that makes us do so. This predicament is called “anticipated utility” by Danny Kahneman and “affective forecasting” by Dan Gilbert. The point is not so much that we tend to mispredict our future happiness, but rather that we do not learn recursively from past experiences. We have evidence of a mental block and distortions in the way we fail to learn from our past errors in projecting the future of our affective states.
We grossly overestimate the length of the effect of misfortune on our lives. You think that the loss of your fortune or current position will be devastating, but you are probably wrong. More likely, you will adapt to anything, as you probably did after past misfortunes. You may feel a sting, but it will not be as bad as you expect. This kind of misprediction may have a purpose: to motivate us to perform
If you are in the business of being a seer, describing the future to other less-privileged mortals, you are judged on the merits of your predictions.
Helenus, in
Our problem is not just that we do not know the future; we do not know much of the past either. We badly need someone like Helenus if we are to know history. Let us see how.
Consider the following thought experiment borrowed from my friends Aaron Brown and Paul Wilmott. The first operation: picture an ice cube and imagine how it will melt over the next couple of hours, trying to envision the shape of the puddle it leaves behind. The second operation: start instead from a puddle of water on the floor and try to reconstruct, in your mind’s eye, the shape of the ice cube it may once have been (if it came from an ice cube at all).
The second operation is harder. Helenus indeed had to have skills.
The difference between these two processes resides in the following. If you have the right models (and some time on your hands, and nothing better to do), you can predict with great precision how the ice cube will melt – this is a specific engineering problem devoid of complexity, easier than the one involving billiard balls. However, from the pool of water you can build infinitely many possible ice cubes, if there was in fact an ice cube there at all. The first direction, from the ice cube to the puddle, is called the forward process; the second, from the puddle back to the ice cube, is the backward process.
In a way, the limitations that prevent us from unfrying an egg also prevent us from reverse engineering history.
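To make the asymmetry concrete, here is a minimal sketch in Python (toy numbers and a deliberately crude “melting” rule – not physics): many differently shaped ice cubes of the same volume all forward-map to the same puddle, so the puddle alone cannot tell you which cube, if any, produced it.

```python
import random

def melt(cube):
    """Forward process: deterministic and easy. The puddle keeps only the
    volume of water; every other detail of the cube is forgotten."""
    length, width, height = cube
    return round(length * width * height, 6)

random.seed(1)

# Build several differently shaped cubes that happen to share the same volume.
cubes = []
for _ in range(5):
    l = random.uniform(0.5, 4.0)
    w = random.uniform(0.5, 4.0)
    h = 8.0 / (l * w)          # force volume = 8; the shape stays arbitrary
    cubes.append((l, w, h))

puddles = {melt(c) for c in cubes}

print("distinct starting cubes:", len(cubes))    # 5
print("distinguishable puddles:", len(puddles))  # 1 - the map is many-to-one
```

Running it backward – handing someone the single number for the puddle and asking for the cube – has no unique answer, which is precisely the historian’s position.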
Now, let me increase the complexity of the forward-backward problem just a bit by assuming nonlinearity. Take what is generally called the “butterfly in India” paradigm from the discussion of Lorenz’s discovery in the previous chapter. As we have seen, a small input in a complex system can lead to nonrandom large results, depending on very special conditions. A single butterfly flapping its wings in New Delhi may be the certain
Confusion between the two is disastrously widespread in common culture. This “butterfly in India” metaphor has fooled at least one filmmaker. For instance,
Take a personal computer. You can use a spreadsheet program to generate a random-looking sequence, a succession of points we can call a history. How? The program iterates a nonlinear equation whose output seems random, even though the equation itself is very simple: if you know it, you can predict the sequence exactly. It is almost impossible, however, for a human being who sees only the output to reverse engineer the equation and predict further values. I am talking about a simple one-line computer program (called the “tent map”) generating a handful of data points, not about the billions of simultaneous events that constitute the real history of the world. In other words, even if history were a nonrandom series generated by some “equation of the world”, as long as reverse engineering such an equation does not seem within human possibility, it should be deemed random and not bear the name “deterministic chaos”. Historians should stay away from chaos theory and the difficulties of reverse engineering, except to discuss general properties of the world and to learn the limits of what they cannot know.
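Here is a minimal sketch of that one-line program, assuming the standard tent map with the slope set just below 2 (an illustrative choice; exactly 2.0 collapses to zero under binary floating point). Producing the “history” is trivial once you hold the rule; the last lines hint at why an outsider staring at the output cannot run it backward: nudge the starting point by a billionth and, within a few dozen steps, the two histories share nothing.

```python
# Tent map: x_{n+1} = mu * min(x_n, 1 - x_n) on the interval [0, 1].
MU = 1.99999   # just under 2 keeps the orbit chaotic without the
               # floating-point artifact that sends mu = 2.0 straight to zero

def tent(x, mu=MU):
    return mu * min(x, 1.0 - x)

def history(x0, steps, mu=MU):
    xs = [x0]
    for _ in range(steps - 1):
        xs.append(tent(xs[-1], mu))
    return xs

# A "history": looks like noise, yet every point is fixed by the seed.
for x in history(0.123456, 10):
    print(f"{x:.6f}")

# Sensitivity to initial conditions: a 1e-9 nudge, and the paths decouple.
a = history(0.123456, 40)
b = history(0.123456 + 1e-9, 40)
print("gap after 40 steps:", abs(a[-1] - b[-1]))
```

Holding the rule, prediction is exact; holding only the printout, it is hopeless.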
This brings me to a greater problem with the historian’s craft. I will state the fundamental problem of practice as follows: while in theory randomness is an intrinsic property, in practice, randomness is incomplete information.
Nonpractitioners of randomness do not understand the subtlety. Often, at conferences, when they hear me talk about uncertainty and randomness, philosophers, and sometimes mathematicians, bug me about the least relevant point, namely whether the randomness I address is “true randomness” or “deterministic chaos” masquerading as randomness. A truly random system is in fact random and does not have predictable properties. A chaotic system has entirely predictable properties, but they are hard to know. So my answer to them is twofold.
a) There is no functional difference in practice between the two since we will never get to make the distinction – the difference is mathematical, not practical. If I see a pregnant woman, the sex of her child is a purely random matter to me (a 50 percent chance for either sex) – but not to her doctor, who might have done an ultrasound. In practice, randomness is fundamentally incomplete information.
b) The mere fact that a person is talking about the difference implies that he has never made a meaningful decision under uncertainty – which is why he does not realize that they are indistinguishable in practice.
Randomness, in the end, is just unknowledge. The world is opaque and appearances fool us.
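To put point (a) in code: here is a toy contrast, nothing more, reusing the tent map from above (same illustrative slope). The same next value is a certainty to an insider who holds the rule and the seed – the doctor with the ultrasound – and sheer opacity to an outsider who sees only the printed numbers and falls back on, say, a historical average.

```python
import statistics

MU = 1.99999                      # same illustrative tent-map slope as above

def tent(x):
    return MU * min(x, 1.0 - x)

# One deterministic history, entirely fixed by the seed.
seed = 0.271828
series = [seed]
for _ in range(200):
    series.append(tent(series[-1]))

past, truth = series[:-1], series[-1]

insider = tent(past[-1])            # knows the rule: predicts the value exactly
outsider = statistics.fmean(past)   # sees only numbers: best guess is an average

print(f"truth            {truth:.6f}")
print(f"insider error    {abs(insider - truth):.2e}")   # zero: no randomness left for her
print(f"outsider error   {abs(outsider - truth):.2e}")  # the gap is her missing information
```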
One final word on history.
History is like a museum where one can go to see the repository of the past, and taste the charm of olden days. It is a wonderful mirror in which we can see our own narratives. You can even track the past using DNA