different amounts of wrinkles.
To me this is not a great surprise, and it illustrates a very simple issue in epidemiological research called ‘confounding variables’: these are things which are related both to the outcome you’re measuring (wrinkles) and to the exposure you are measuring (food), but which you haven’t thought of yet. They can confuse an apparently causal relationship, and you have to think of ways to exclude or minimise confounding variables to get to the right answer, or at least be very wary that they are there. In the case of this study, there are almost too many confounding variables to describe.
I eat well – with lots of olive oil, as it happens – and I don’t have many wrinkles. I also have a middle-class background, plenty of money, an indoor job, and, if we discount infantile threats of litigation and violence from people who cannot tolerate any discussion of their ideas, a life largely free from strife. People with completely different lives will always have different diets, and different wrinkles. They will have different employment histories, different amounts of stress, different amounts of sun exposure, different levels of affluence, different levels of social support, different patterns of cosmetics use, and much more. I can imagine plenty of reasons why you might find that people who eat olive oil have fewer wrinkles; and the olive oil having a causative role, an actual physical effect on your skin when you eat it, is fairly low down on my list.
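To make the point concrete, here is a small illustrative simulation (entirely invented: the variable names, the numbers and the 'affluence' factor are mine, not anything from the study) in which a hidden confounder pushes olive-oil consumption up and wrinkle counts down, while the oil itself does nothing whatsoever to the skin:

    # Invented illustration: "affluence" is a hidden confounder that raises
    # olive-oil consumption and lowers wrinkle count; the oil itself has no
    # effect on the skin in this toy model.
    import random

    random.seed(0)

    people = []
    for _ in range(10_000):
        affluence = random.gauss(0, 1)              # the hidden confounder
        olive_oil = affluence + random.gauss(0, 1)  # richer people eat more olive oil
        wrinkles = -affluence + random.gauss(0, 1)  # richer people get less sun, less stress, fewer wrinkles
        people.append((olive_oil, wrinkles))

    def correlation(pairs):
        n = len(pairs)
        mx = sum(x for x, _ in pairs) / n
        my = sum(y for _, y in pairs) / n
        cov = sum((x - mx) * (y - my) for x, y in pairs) / n
        sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
        sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
        return cov / (sx * sy)

    print(correlation(people))  # roughly -0.5: more oil, fewer wrinkles, no causation anywhere

The survey-style correlation appears right on cue, even though the only real influence is the hidden variable; measure the confounder and adjust for it, or stratify by it, and the association melts away.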
Now, to be fair to nutritionists, they are not alone in failing to understand the importance of confounding variables, in their eagerness for a clear story. Every time you read in a newspaper that ‘moderate alcohol intake’ is associated with some improved health outcome – less heart disease, less obesity, anything – to gales of delight from the alcohol industry, and of course from your friends, who say, ‘Ooh well, you see, it’s better for me to drink a little …’ as they drink a lot – you are almost certainly witnessing a journalist of limited intellect, overinterpreting a study with huge confounding variables.
This is because, let’s be honest here: teetotallers are abnormal. They’re not like everyone else. They will almost certainly have a reason for not drinking, and it might be moral, or cultural, or perhaps even medical, but there’s a serious risk that whatever is causing them to be teetotal might also have other effects on their health, confusing the relationship between their drinking habits and their health outcomes. Like what? Well, perhaps people from specific ethnic groups who are teetotal are also more likely to be obese, so they are less healthy. Perhaps people who deny themselves the indulgence of alcohol are more likely to indulge in chocolate and chips. Perhaps preexisting ill health will force you to give up alcohol, and that’s skewing the figures, making teetotallers look unhealthier than moderate drinkers. Perhaps these teetotallers are recovering alcoholics: among the people I know, they’re the ones who are most likely to be absolute teetotallers, and they’re also more likely to be fat, from all those years of heavy alcohol abuse. Perhaps some of the people who say they are teetotal are just lying.
This is why we are cautious about interpreting observational data, and to me, Dowden has extrapolated too far from the data, in her eagerness to dispense – with great authority and certainty –
If we were modern about this, and wanted to offer constructive criticism, what might she have written instead? I think, both here and elsewhere, that despite what journalists and self-appointed ‘experts’ might say, people are perfectly capable of understanding the evidence for a claim, and anyone who withholds, overstates or obscures that evidence, while implying that they’re doing the reader a favour, is probably up to no good. MMR is an excellent parallel example of where the bluster, the panic, the ‘concerned experts’ and the conspiracy theories of the media were very compelling, but the science itself was rarely explained.
So, leading by example, if I were a media nutritionist, I might say, if pushed, after giving all the other sensible sun advice: ‘A survey found that people who eat more olive oil have fewer wrinkles,’ and I might feel obliged to add, ‘Although people with different diets may differ in lots of other ways.’ But then, I’d also be writing about food, so: ‘Never mind, here’s a delicious recipe for salad dressing anyway.’ Nobody’s going to employ me to write a nutritionist column.
From the lab bench to the glossies
Nutritionists love to quote basic laboratory science research because it makes them look as if they are actively engaged in a process of complicated, impenetrable, highly technical academic work. But you have to be very cautious about how you extrapolate from what happens to some cells in a dish, on a laboratory bench, to the complex system of a living human being, where things can work in completely the opposite way to what laboratory work would suggest. Anything can kill cells in a test tube. Fairy Liquid will kill cells in a test tube, but you don’t take it to cure cancer. This is just another example of how nutritionism, despite the ‘alternative medicine’ rhetoric and phrases like ‘holistic’, is actually a crude, unsophisticated, old-fashioned, and above all
Later we will see Patrick Holford, the founder of the Institute for Optimum Nutrition, stating that vitamin C is better than the AIDS drug AZT on the basis of an experiment where vitamin C was tipped onto some cells in a dish. Until then, here is an example from Michael van Straten – who has fallen sadly into our quadrat, and I don’t want to introduce too many new characters or confuse you – writing in the
Forty years ago a man called Austin Bradford Hill, the grandfather of modern medical research, who was key in discovering the link between smoking and lung cancer, wrote out a set of guidelines, a kind of tick list, for assessing whether a relationship between an exposure and an outcome is causal. These are the cornerstone of evidence-based medicine, and often worth having at the back of your mind: it needs to be a strong association, which is consistent, and specific to the thing you are studying, where the putative cause comes before the supposed effect in time; ideally there should be a biological gradient, such as a dose–response effect; it should be consistent with, or at least not completely at odds with, what is already known (because extraordinary claims require extraordinary evidence); and it should be biologically plausible.
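To illustrate just one item on that tick list, the biological gradient, here is a minimal sketch (the dose groups and outcome rates are invented for the purpose) that checks whether an outcome becomes steadily more common as the exposure increases:

    # Invented dose groups with the outcome rate observed in each: a crude
    # check for Bradford Hill's "biological gradient" (dose-response effect).
    dose_groups = [
        ("none",     0.02),
        ("light",    0.04),
        ("moderate", 0.07),
        ("heavy",    0.12),
    ]

    rates = [rate for _, rate in dose_groups]

    # Does the outcome rate rise with every step up in exposure?
    gradient = all(a < b for a, b in zip(rates, rates[1:]))

    print("dose-response gradient present:", gradient)

A clean gradient is reassuring, but it is not proof on its own: a confounder that also scales with the exposure will produce exactly the same staircase.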
Michael van Straten, here, has got biological plausibility, and little else. Medics and academics are very wary of people making claims on such tenuous grounds, because it’s something you get a lot from people with something to sell: specifically, drug companies. The public don’t generally have to deal with drug-company propaganda, because the companies are not currently allowed to talk to patients in Europe – thankfully – but they badger doctors incessantly, and they use many of the same tricks as the miracle-cure industries. You’re taught about these tricks at medical school, which is how I’m able to teach you about them now.
Drug companies are very keen to promote theoretical advantages (‘It works more on the Z4 receptor, so it must have fewer side-effects!’), animal experiment data or ‘surrogate outcomes’ (‘It improves blood test results, it must be protective against heart attacks!’) as evidence of the efficacy or superiority of their product. Many of the more detailed popular nutritionist books, should you ever be lucky enough to read them, play this classic drug-company card very assertively. They will claim, for example, that a ‘placebo-controlled randomised controlled trial’ has shown
For example, the trial may merely have shown that there were measurably increased amounts of the vitamin in the bloodstream after taking a vitamin, compared to placebo, which is a pretty unspectacular finding in itself; yet this is presented to the unsuspecting lay reader as a positive trial. Or the trial may have shown that there were changes in some other blood marker, perhaps the level of an ill-understood immune-system component, which, again, the media nutritionist will present as concrete evidence of a real-world benefit.
There are problems with using such surrogate outcomes. They are often only tenuously associated with the real disease, in a very abstract theoretical model, and often developed in the very idealised world of an experimental animal, genetically inbred, kept under conditions of tight physiological control. A surrogate outcome can – of course – be used to generate and examine hypotheses about a real disease in a real person, but it needs to be very carefully validated. Does it show a clear dose–response relationship? Is it a true predictor of disease, or merely a ‘co-variable’, something that is related to the disease in a different way (e.g. caused