extensions, see Bhalla and Iyengar (1999). Resilience: Cohen et al. (2000), Barabási and Bonabeau (2003), Barabási (2002), and Banavar et al. (2000). Power laws and the Web: Adamic and Huberman (1999) and Adamic (1999). Statistics of the Internet: Huberman (2001), Willinger et al. (2004), and Faloutsos, Faloutsos, and Faloutsos (1999). For DNA, see Vogelstein et al. (2000).
Self-organized criticality: Bak (1996).
Pioneers of fat tails: for wealth, Pareto (1896), Yule (1925, 1944). Less of a pioneer: Zipf (1932, 1949). For linguistics, see Mandelbrot (1952).
Pareto: see Bouvier (1999).
Endogenous vs. exogenous: Sornette et al. (2004).
Sperber’s work: Sperber (1996a, 1996b, 1997).
Regression: if you hear the phrase
The notion of central limit: very misunderstood; it takes a long time to reach the central limit, and since we do not live in the asymptote, we’ve got problems. All varieties of random variables (as in the example of Chapter 16, which started with a +1 or –1 draw, called a Bernoulli draw) become Gaussian under summation (we summed up the wins of the 40 tosses). Summation is key here, since we are considering the results of adding up the 40 steps, which is where the Gaussian, under the first and second central assumptions, becomes what is called a “distribution.” (A distribution tells you how your outcomes are likely to be spread out, or distributed.) However, different variables may get there at different speeds. This is called the central limit theorem: if you add random variables coming from these individual tame jumps, the sum will lead to the Gaussian.
Where does the central limit not work? If you do not have these central assumptions, but instead have jumps of random size, then we do not get the Gaussian. Furthermore, we sometimes converge very slowly to the Gaussian. For preasymptotics and scalability, Mandelbrot and Taleb (2007a), Bouchaud and Potters (2003). For the problem of working outside asymptotes, Taleb (2007).
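Both points above can be sketched in a few lines of Python. The simulation below is my own illustration, with arbitrary parameters (40 steps, 20,000 trials), not drawn from any of the references: sums of tame ±1 Bernoulli steps drift toward the Gaussian, while sums of Cauchy jumps (jumps of random, unbounded size) never do.

```python
# Illustration (an assumption-laden sketch, not from the text): tame Bernoulli
# steps under summation vs. wild Cauchy jumps under summation.
import math
import random

random.seed(42)

def bernoulli_walk(n_steps: int) -> int:
    """Sum of n_steps draws of +1 or -1 (the 40-toss example)."""
    return sum(random.choice((1, -1)) for _ in range(n_steps))

def cauchy_walk(n_steps: int) -> float:
    """Sum of n_steps Cauchy jumps: random sizes, no Gaussian limit."""
    return sum(math.tan(math.pi * (random.random() - 0.5)) for _ in range(n_steps))

n, trials = 40, 20_000
sd = math.sqrt(n)  # standard deviation of a sum of n unit steps

bern = [bernoulli_walk(n) / sd for _ in range(trials)]
within_one_sd = sum(abs(x) <= 1 for x in bern) / trials
# Near 0.73 rather than the asymptotic Gaussian 0.68: forty discrete
# steps have not yet reached the limit (the preasymptote at work).
print(f"Bernoulli sums within 1 sd: {within_one_sd:.3f}")

# Cauchy sums keep producing enormous outliers no matter how many you add.
cauchy = [cauchy_walk(n) / n for _ in range(trials)]
print(f"largest |Cauchy average|: {max(abs(x) for x in cauchy):.0f}")
```

Note that even the tame case converges slowly: at 40 steps the Bernoulli sums are still visibly discrete, which is exactly the preasymptotic problem.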
Aurea mediocritas: historical perspective, in Naya and Pouey-Mounou (2005) aptly called
Reification (hypostatization): Lukács, in Bewes (2002).
Catastrophes: Posner (2004).
Concentration and modern economic life: Zajdenweber (2000).
Choices of society structure and compressed outcomes: the classical paper is Rawls (1971), though Frohlich, Oppenheimer, and Eavy (1987a, 1987b), as well as Lissowski, Tyszka, and Okrasa (1991), contradict the notion of the desirability of Rawls’s veil (though by experiment). People prefer maximum average income subjected to a floor constraint: an equality-for-the-poor, inequality-for-the-rich type of environment.
Gaussian contagion: Quetelet in Stigler (1986). Francis Galton (as quoted in Ian Hacking’s
“Finite variance” nonsense: associated with the CLT is a rather technical assumption called “finite variance”: none of these building-block steps can take an infinite value if you square them or multiply them by themselves; they need to be bounded at some number. We simplified here by making them all one single step, i.e., of finite standard deviation. But the problem is that some fractal payoffs may have finite variance and still not take us there rapidly. See Bouchaud and Potters (2003).
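As a rough illustration of what infinite variance means in practice (my own sketch, with arbitrary sample sizes and a tail exponent of 1.5, not taken from Bouchaud and Potters): the sample standard deviation of a Gaussian settles down as the sample grows, while that of an infinite-variance Pareto never does, because it is dominated by the largest draw so far.

```python
# Sketch: sample standard deviation at growing sample sizes,
# finite-variance Gaussian vs. infinite-variance Pareto (alpha = 1.5).
import random
import statistics

random.seed(7)

def running_sd(draws):
    """Sample standard deviation at increasing sample sizes."""
    return [statistics.pstdev(draws[:n]) for n in (100, 1_000, 10_000, 100_000)]

gauss = [random.gauss(0, 1) for _ in range(100_000)]
pareto = [random.paretovariate(1.5) for _ in range(100_000)]  # infinite variance

# The Gaussian column hovers near 1; the Pareto column does not settle,
# since no finite number is "the" standard deviation.
print("Gaussian sd estimates:", [round(s, 2) for s in running_sd(gauss)])
print("Pareto   sd estimates:", [round(s, 2) for s in running_sd(pareto)])
```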
Lognormal: there is an intermediate variety called the lognormal, emphasized by one Gibrat (see Sutton [1997]) early in the twentieth century as an attempt to explain the distribution of wealth. In this framework it is not quite that the wealthy get wealthier, as in a pure preferential-attachment situation, but that if your wealth is at 100 you will vary by 1, while if your wealth is at 1,000 you will vary by 10: the relative changes in your wealth are Gaussian. So the lognormal superficially resembles the fractal, in the sense that it may tolerate some large deviations, but it is dangerous because its tails taper off rapidly at the end. The introduction of the lognormal was a very bad compromise, but a way to conceal the flaws of the Gaussian.
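A minimal sketch of the point, assuming a toy wealth process of my own devising (100 periods of Gaussian relative changes, 20,000 paths; all parameters arbitrary): the lognormal tolerates moderate spread, but its tail beyond ten times the median is essentially empty, while a fractal (Pareto, tail exponent 1.5) keeps a visible share of outcomes out there.

```python
# Sketch: multiplicative Gaussian changes give a lognormal; compare its
# tail to a Pareto (fractal) tail.
import math
import random

random.seed(11)

def lognormal_wealth(periods: int = 100, vol: float = 0.05) -> float:
    """Wealth whose relative changes are Gaussian: at 100 it varies by
    about 5, at 1,000 by about 50. The resulting level is lognormal."""
    w = 100.0
    for _ in range(periods):
        w *= math.exp(random.gauss(-vol ** 2 / 2, vol))
    return w

trials = 20_000
logn = [lognormal_wealth() for _ in range(trials)]
pareto = [100 * random.paretovariate(1.5) for _ in range(trials)]

def tail_share(xs):
    """Fraction of outcomes beyond 10 times the median."""
    xs = sorted(xs)
    median = xs[len(xs) // 2]
    return sum(x > 10 * median for x in xs) / len(xs)

print(f"lognormal share beyond 10x median: {tail_share(logn):.4f}")
print(f"Pareto    share beyond 10x median: {tail_share(pareto):.4f}")
```

The design choice of multiplying by exp(Gaussian) keeps wealth positive and makes the log of wealth exactly Gaussian, which is what "relative changes are Gaussian" amounts to in the limit.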
Extinctions: Sterelny (2001). For extinctions from abrupt fractures, see Courtillot (1995) and Courtillot and Gaudemer (1996). Jumps: Eldredge and Gould.
Problem of “how large”: now to a problem that is usually misunderstood. This scalability might stop somewhere, but I do not know where, so I might consider it infinite. The statements very large and I don’t know how large and infinitely large are epistemologically substitutable.
FIGURE 15: Typical distribution with power-law tails (here a Student t), on a log scale.

FIGURE 16: The two exhaustive domains of attraction: vertical or straight line with slopes either negative infinity or constant negative.
My ideas are made very simple with this clean-cut polarization, added to the problem of not knowing which basin we are in owing to the scarcity of data in the far right.
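The two basins of FIGURE 16 can be sketched numerically. The code below is my own illustration (arbitrary thresholds and a Pareto tail exponent of 1.5): it estimates the log-log slope of the survival function P(X > x) in two regions of the tail. For the fractal the slope is roughly constant; for the Gaussian it steepens toward negative infinity.

```python
# Sketch: log-log slope of the tail, constant for a Pareto,
# plunging for a Gaussian.
import math
import random

random.seed(3)

def loglog_slope(draws, x_lo, x_hi):
    """Slope of log P(X > x) versus log x between two thresholds."""
    n = len(draws)
    p_lo = sum(d > x_lo for d in draws) / n
    p_hi = sum(d > x_hi for d in draws) / n
    return (math.log(p_hi) - math.log(p_lo)) / (math.log(x_hi) - math.log(x_lo))

n = 200_000
pareto = [random.paretovariate(1.5) for _ in range(n)]
gauss = [abs(random.gauss(0, 1)) for _ in range(n)]

# Fractal basin: the slope stays near -1.5 wherever you look in the tail.
print("Pareto slope 2->4 :", round(loglog_slope(pareto, 2, 4), 2))
print("Pareto slope 8->16:", round(loglog_slope(pareto, 8, 16), 2))
# Gaussian basin: the slope steepens as you move farther out.
print("Gauss  slope 1->2 :", round(loglog_slope(gauss, 1, 2), 2))
print("Gauss  slope 2->3 :", round(loglog_slope(gauss, 2, 3), 2))
```

The scarcity-of-data problem shows up here too: the far-tail slope estimates rest on the handful of observations beyond the higher threshold.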
Fractals and power laws: Mandelbrot (1975, 1982), Schroeder (1991) is imperative. John Chipman’s unpublished manuscript
“To come very near true theory and to grasp its precise application are two very different things as the history of science teaches us. Everything of importance has been said before by somebody who did not discover it”.
Fractals in poetry: for the quote on Dickinson, see Fulton (1998).
Lacunarity: Brockman (2005). In the arts, Mandelbrot (1982).
Fractals in medicine: “New Tool to Diagnose and Treat Breast Cancer,”
General reference books in statistical physics: the most complete (in relation to fat tails) is Sornette (2004). See also Voit (2001) or the far deeper Bouchaud and Potters (2002) for financial prices and econophysics. For “complexity” theory, technical books: Boccara (2004), Strogatz (1994), the popular Ruelle (1991), and also Prigogine (1996).
Fitting processes: for the philosophy of the problem, Taleb and Pilpel (2004). See also Pisarenko and Sornette (2004), Sornette et al. (2004), and Sornette and Ide (2001).
Poisson jump: sometimes people propose a Gaussian distribution with a small probability