I didn’t realise it at the time, but as I was leaving finance for a career in epidemiology, in another part of London the two fields were coming together. Over on Threadneedle Street, the Bank of England was battling to limit the fallout from Lehman’s collapse.[35] More than ever, it was clear that many had overestimated the stability of the financial network. Popular assumptions of robustness and resilience no longer held up; contagion was a much bigger problem than people had thought.
This is where the disease researchers came in. Building on that 2006 conference at the Federal Reserve, Robert May had started to discuss the problem with other scientists. One of them was Nim Arinaminpathy, a colleague at the University of Oxford. Arinaminpathy recalled that, pre-2007, it was unusual to study the financial system as a whole. ‘There was a lot of faith in the vast, complex financial system being self-correcting,’ he said. ‘The attitude was “we don’t need to know how the system works, instead we can concentrate on individual institutions”.’[36] Unfortunately, the events of 2008 would reveal the weakness in this approach. Surely there was a better way?
During the late 1990s, May had been Chief Scientific Adviser to the UK Government. As part of this role, he’d got to know Mervyn King, who would later become Governor of the Bank of England. When the 2008 crisis hit, May suggested they look at the issue of contagion in more detail. If a bank suffered a shock, how might it propagate through the financial system? May and his colleagues were well placed to tackle the problem. In the preceding decades, they had studied a range of infections – from measles to HIV – and developed new methods to guide disease control programmes. These ideas would eventually revolutionise central banks’ approach to financial contagion. However, to understand how these methods work, we first need to look at a more fundamental question: how do we work out whether an infection – or a crisis – will spread or not?
After William Kermack and Anderson McKendrick announced their work on epidemic theory in the 1920s, the field took a sharp mathematical turn. Although people continued working on outbreak analysis, the work became more abstract and technical. Researchers like Alfred Lotka published lengthy, complicated papers, moving the field away from real-life epidemics. They found ways to study hypothetical outbreaks involving random events, intricate transmission processes and multiple populations. The emergence of computers helped drive these technical developments; models that were previously difficult to analyse by hand could now be simulated.[37]
Then progress stuttered. The obstacle was a 1957 textbook written by mathematician Norman Bailey. Continuing the theme of the preceding years, it was almost entirely theoretical, with hardly any real-life data. The textbook was an impressive survey of epidemic theory, which would help lure several young researchers into the field. But there was a problem: Bailey had left out a crucial idea, which would turn out to be one of the most important concepts in outbreak analysis.[38]
The idea in question had originated with George MacDonald, a malaria researcher based in the Ross Institute at the London School of Hygiene & Tropical Medicine. In the early 1950s, MacDonald had refined Ronald Ross’s mosquito model, making it possible to incorporate real-life data about things like mosquito lifespan and feeding rates. By tailoring the model to actual scenarios, MacDonald was able to spot which part of the transmission process was most vulnerable to control measures. Whereas Ross had focused on the mosquito larvae that lived in water, MacDonald realised that to tackle malaria, agencies would be better off targeting the adult mosquitoes. They were the weakest link in the chain of transmission.[39]
In 1955, the World Health Organization announced plans to eradicate a disease for the first time. Inspired by MacDonald’s analysis, they had chosen malaria. Eradication meant getting rid of all infections globally, something that would eventually prove harder to achieve than hoped; some mosquitoes became resistant to pesticides, and control measures targeting mosquitoes were less effective in some areas than others. As a result, the WHO would later shift its focus to smallpox, eradicating the disease in 1980.[40]
MacDonald’s idea to target adult mosquitoes had been a crucial piece of research, but it wasn’t the one that Bailey had omitted in his textbook. The truly groundbreaking idea had been nestled in the appendix of MacDonald’s paper.[41] Almost as an afterthought, he had proposed a new way of thinking about infections. Rather than looking at critical mosquito densities, he suggested thinking about what would happen if a single infectious person arrived in the population. How many more infections would follow?
Twenty years later, mathematician Klaus Dietz would finally pick up on the idea in MacDonald’s appendix. In doing so, he would help bring the theory of epidemics out of its mathematical niche and into the wider world of public health. Dietz outlined a quantity that would become known as the ‘reproduction number’, or R for short. R represented the number of new infections we’d expect a typical infectious person to generate on average.
In contrast to the rates and thresholds used by Kermack and McKendrick, R is a more intuitive – and general – way to think about contagion. It simply asks: how many people would we expect a case to pass the infection on to? As we shall see in later chapters, it’s an idea that we can apply to a wide range of outbreaks, from gun violence to online memes.
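To make the idea concrete, here is a rough illustrative sketch in Python – not a model used by May, Dietz or any central bank – of a simple branching process in which each case infects a Poisson-distributed number of new people, with the average set to R. The function name `simulate_outbreak` and its parameter values are arbitrary choices for the example.

```python
# An illustrative sketch only: a branching process in which each case
# infects a Poisson-distributed number of new people with mean R.
import numpy as np

def simulate_outbreak(R, generations=12, initial_cases=1, seed=1):
    """Return the number of cases in each generation for one simulated run."""
    rng = np.random.default_rng(seed)
    counts = [initial_cases]
    for _ in range(generations):
        current = counts[-1]
        if current == 0:
            break  # the outbreak has already died out
        # Each current case generates a random number of new cases, averaging R
        counts.append(int(rng.poisson(R, size=current).sum()))
    return counts

print("R = 0.8:", simulate_outbreak(0.8))  # usually dwindles to zero
print("R = 2.0:", simulate_outbreak(2.0))  # usually grows generation by generation
```

Run with R below one, the case counts tend to fizzle out within a few generations; run with R above one, they tend to snowball.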
R is particularly useful because it