at the time, ‘but it is near impossible to make accurate longer-term forecasts’.[64]

To be clear, there are some very good researchers within the wider CDC, and the Ebola model was just one output from a large research community there. But it does illustrate the challenges of producing and communicating high-profile outbreak analysis. One problem with flawed predictions is that they reinforce the idea that models aren’t particularly useful. If models produce incorrect forecasts, the argument goes, why should people pay attention to them?

We face a paradox when it comes to forecasting outbreaks. Although pessimistic weather forecasts won’t affect the size of a storm, outbreak predictions can influence the final number of cases. If a model suggests an outbreak is a genuine threat, it may trigger a major response from health agencies. And if this brings the outbreak under control, the original forecast will turn out to be wrong. It’s therefore easy to confuse a useless forecast (i.e. one predicting an outcome that would never have occurred) with a useful one (i.e. one predicting an outcome that would have occurred had agencies not intervened). Similar situations occur in other fields. In the run-up to the year 2000, governments and companies spent hundreds of billions of dollars globally to counter the ‘Millennium bug’. Originally a space-saving convention in early computers, which abbreviated dates to two digits, the bug had propagated through modern systems. Because of the efforts to fix the problem, the damage in reality was limited, which led many media outlets to complain that the risk had been overhyped.[65]

Strictly speaking, the CDC Ebola estimate avoided this problem because it wasn’t actually a forecast; it was one of several scenarios. Whereas a forecast describes what we think will happen in the future, a scenario shows what could happen under a specific set of assumptions. The estimate of 1.4 million cases assumed the epidemic would continue to grow at the exact same rate. If disease control measures were included in the model, it predicted far fewer cases. But once numbers are picked up, they can stick in the memory, fuelling scepticism about the kinds of models that created them. ‘Remember the 1 million Ebola cases predicted by CDC in fall 2014,’ tweeted Joanne Liu, International President of Médecins Sans Frontières (MSF), in response to a 2018 article about forecasting.[66] ‘Modeling has also limits.’
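The gap between the two kinds of scenario can be made concrete with a small calculation. The sketch below uses purely illustrative numbers, not the CDC’s actual model or parameters: it simply projects weekly case counts forward, once assuming growth continues unchanged, and once assuming control measures turn growth into decline partway through.

```python
# Illustrative only: compare two outbreak scenarios using made-up
# parameters, not the CDC's actual Ebola model.

def project_cases(initial_cases, growth_rate, weeks):
    """Project weekly case counts assuming constant exponential growth."""
    return [initial_cases * (1 + growth_rate) ** w for w in range(weeks + 1)]

# Scenario 1: growth continues at the exact same rate
# (hypothetical 30% increase per week)
unchecked = project_cases(1000, 0.30, 20)

# Scenario 2: identical until week 8, then control measures flip
# growth into a hypothetical 10% weekly decline
controlled = unchecked[:9]
for _ in range(12):
    controlled.append(controlled[-1] * (1 - 0.10))

print(f"Unchecked scenario, week 20:  {unchecked[-1]:,.0f} cases per week")
print(f"Controlled scenario, week 20: {controlled[-1]:,.0f} cases per week")
```

The two scenarios diverge by orders of magnitude by the end of the projection, which is why quoting the unchecked number on its own, as happened with the 1.4 million figure, is so misleading.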

Even if the 1.4 million estimate was just a scenario, it still implied a baseline: if nothing had changed, that is what would have happened. During the 2013–2016 epidemic, almost 30,000 cases of Ebola were reported across Liberia, Sierra Leone and Guinea. Did the introduction of control measures by Western health agencies really prevent over 1.3 million cases?[67]

In the field of public health, people often refer to disease control measures as ‘removing the pumphandle.’ It’s a nod to John Snow’s work on cholera, and the removal of the handle on the Broad Street pump. There’s just one problem with this phrase: when the pumphandle came off on 8 September 1854, London’s cholera outbreak was already well in decline. Most of the people at risk had either caught the infection already, or fled the area. If we’re being accurate, ‘removing the pumphandle’ should really refer to a control measure that’s useful in theory, but delivered too late.

Soho cholera outbreak, 1854

By the time some of the largest Ebola treatment centres opened in late 2014, the outbreak was already slowing down, if not declining altogether.[68] Yet in some areas, control measures did coincide with a fall in cases. It’s therefore tricky to untangle the exact impact of these measures. Response teams often introduced several measures at once, from tracing infected contacts and encouraging changes in behaviour to opening treatment centres and conducting safe burials. What effect did international efforts actually have?

Using a mathematical model of Ebola transmission, our group estimated that the introduction of additional treatment beds – which isolated cases from the community and thereby reduced transmission – prevented around 60,000 Ebola cases in Sierra Leone between September 2014 and February 2015. In some districts, we found that the expansion of treatment centres could explain the entire outbreak decline; in other areas, there was evidence of an additional reduction in transmission in the community. This could have reflected other local and international control efforts, or perhaps changes in behaviour that were occurring anyway.[69]
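The underlying logic of that analysis can be sketched in a few lines, though what follows is a toy SIR-style simulation with hypothetical parameters, not the published model. Isolating a fraction of infectious people (for example, in treatment beds) removes them from community transmission; comparing runs with and without isolation gives a crude estimate of cases averted.

```python
# Toy discrete-time SIR-style model (hypothetical parameters, not the
# published Ebola analysis). `isolation` is the fraction of infectious
# people removed from community transmission, e.g. in treatment beds.

def total_cases(beta, gamma, isolation, days, n=1_000_000, i0=100):
    """Run the epidemic and return the cumulative number of infections."""
    s, i, total = n - i0, i0, i0
    for _ in range(days):
        new_inf = beta * s * i * (1 - isolation) / n  # community transmission
        new_rec = gamma * i                           # recovery/removal
        s -= new_inf
        i += new_inf - new_rec
        total += new_inf
    return total

# Compare outcomes with and without isolation (illustrative values only)
no_beds   = total_cases(beta=0.3, gamma=0.2, isolation=0.0, days=365)
with_beds = total_cases(beta=0.3, gamma=0.2, isolation=0.4, days=365)
print(f"Cases averted by isolation: {no_beds - with_beds:,.0f}")
```

In this toy version, isolating 40 per cent of infectious people pushes the reproduction number below one and the outbreak fizzles out; in reality, disentangling the effect of beds from the other measures introduced at the same time is exactly the hard part described above.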

Historical Ebola outbreaks have shown how important behaviour changes can be for outbreak control. When the first reported outbreak of Ebola started in the village of Yambuku, Zaire (now the Democratic Republic of the Congo) in 1976, the infection sparked in a small local hospital before spreading to the community. Based on archive data from the original outbreak investigation, my colleagues and I estimated that the transmission rate in the community declined sharply a few weeks into the outbreak.[70] Much of the decline came before the hospital closed and before the international teams arrived. ‘The communities where the outbreak continued to spread developed their own form of social distancing,’ recalled epidemiologist David Heymann, who was part of the investigation.[71] Without doubt, the international response to Ebola in late 2014 and early 2015 helped prevent cases in West Africa. But at the same time, foreign organisations should be cautious about claiming too much credit for the decline of such outbreaks.

Despite the challenges involved in producing forecasts, there is a large demand for them. Whether we’re looking at the spread of infectious diseases or crime, governments and other organisations need evidence to base their future policies on. So how can we improve outbreak forecasts?

Generally, we can trace problems with a forecast back to either the model itself or the data that goes into it. A good rule of thumb is that a mathematical model should be designed around the data available. If we don’t have data about the different transmission routes, for example, we should instead try to make simple but plausible assumptions about the overall spread. As well as making models easier to interpret, this approach also makes it easier to communicate what is unknown. Rather than grappling
