Often it is simply information that is missing, rather than some amazing new molecule. Eclampsia, say, is estimated to cause 50,000 deaths in pregnancy around the world each year, and the best treatment, by a huge margin, is cheap, unpatented magnesium sulphate (high doses given intravenously, that is: not some alternative medicine supplement, but also not the expensive anti-convulsants that were used for many decades). Although magnesium had been used to treat eclampsia since 1906, its position as the best treatment was only established almost a century later, in 2002, with the help of the World Health Organisation, because there was no commercial interest in the research question: nobody has a patent on magnesium, and the majority of deaths from eclampsia occur in the developing world. Millions of women have died of the condition since 1906, and many of those deaths were avoidable.
To an extent these are political and development issues, which we should leave for another day; and I have a promise to make good on: you want to take the skills you’ve learnt about levels of evidence and the distortion of research, and use them to understand how the pharmaceutical industry distorts data and pulls the wool over our eyes. How would we go about proving this? Overall, it’s true, drug company trials are much more likely to produce a positive outcome for the company’s own drug. But to leave it there would be weak-minded.
What I’m about to tell you is what I teach medical students and doctors – here and there – in a lecture I rather childishly call ‘drug company bullshit’. It is, in turn, what I was taught at medical school, and I think the easiest way to understand the issue is to put yourself in the shoes of a big pharma researcher.
You have a pill. It’s OK, maybe not that brilliant, but a lot of money is riding on it. You need a positive result, but your audience aren’t homeopaths, journalists or the public: they are doctors and academics, so they have been trained in spotting the obvious tricks, like ‘no blinding’, or ‘inadequate randomisation’. Your sleights of hand will have to be much more elegant, much more subtle, but every bit as powerful.
What can you do?
Well, firstly, you could study it in winners. Different people respond differently to drugs: old people on lots of medications are often no-hopers, whereas younger people with just one problem are more likely to show an improvement. So only study your drug in the latter group. This will make your research much less applicable to the actual people that doctors are prescribing for, but hopefully they won’t notice. This is so commonplace it is hardly worth giving an example.
Next up, you could compare your drug against a useless control. Many people would argue, for example, that you should never compare your drug against placebo, because it proves nothing of clinical value: in the real world nobody is treated with placebo, and what doctors and patients need to know is whether your drug is better than the best treatment currently available. But a trial against placebo sets a much lower bar, and is far more likely to give you a flattering, positive result, so placebo it is.
Then things get more interesting. If you do have to compare your drug with one produced by a competitor – to save face, or because a regulator demands it – you could try a sneaky underhand trick: use an inadequate dose of the competing drug, so that patients on it don’t do very well; or give a very high dose of the competing drug, so that patients experience lots of side-effects; or give the competing drug in the wrong way (perhaps orally when it should be intravenous, and hope most readers don’t notice); or you could increase the dose of the competing drug much too quickly, so that the patients taking it get worse side-effects. Your drug will shine by comparison.
You might think no such thing could ever happen. If you follow the references in the back, you will find studies where patients were given really rather high doses of old-fashioned antipsychotic medication (which made the new-generation drugs look as if they were better in terms of side-effects), and studies with doses of SSRI antidepressants which some might consider unusual, to name just a couple of examples. I know. It’s slightly incredible.
Of course, another trick you could pull with side-effects is simply not to ask about them; or rather – since you have to be sneaky in this field – you could be careful about how you ask. Here is an example. SSRI antidepressant drugs cause sexual side-effects fairly commonly, including anorgasmia. We should be clear (and I’m trying to phrase this as neutrally as possible): I regard the sensation of orgasm as important, and everything I know about the world suggests that other people regard it as important too, so losing it is not a trivial side-effect.
And yet, various studies have shown that the reported prevalence of anorgasmia in patients taking SSRI drugs varies between 2 per cent and 73 per cent, depending primarily on how you ask: a casual, open-ended question about side-effects, for example, or a careful and detailed enquiry. One 3,000-subject review on SSRIs simply did not list any sexual side-effects on its twenty-three-item side-effect table. Twenty-three other things were more important, according to the researchers, than losing the sensation of orgasm. I have read them. They are not.
But back to the main outcomes. And here is a good trick: instead of a real-world outcome, like death or pain, you could always use a ‘surrogate outcome’, which is easier to attain. If your drug is supposed to reduce cholesterol and so prevent cardiac deaths, for example, don’t measure cardiac deaths; measure reduced cholesterol instead. That’s much easier to achieve than a reduction in cardiac deaths, and the trial will be cheaper and quicker to do, so your result will arrive sooner and be far more likely to be positive, even though a lower number on a blood test is not the same thing as a life saved.
Now you’ve done your trial, and despite your best efforts things have come out negative. What can you do? Well, if your trial has been good overall, but has thrown out a few negative results, you could try an old trick: don’t draw attention to the disappointing data by putting it on a graph. Mention it briefly in the text, and ignore it when drawing your conclusions. (I’m so good at this I scare myself. Comes from reading too many rubbish trials.)
If your results are completely negative, don’t publish them at all, or publish them only after a long delay. This is exactly what the drug companies did with the data on SSRI antidepressants: they hid the data suggesting they might be dangerous, and they buried the data showing them to perform no better than placebo. If you’re really clever, and have money to burn, then after you get disappointing data, you could do some more trials with the same protocol, in the hope that they will be positive: then try to bundle all the data up together, so that your negative data is swallowed up by some mediocre positive results.
Or you could get really serious, and start to manipulate the statistics. For two pages only, this book will now get quite nerdy. I understand if you want to skip it, but know that it is here for the doctors who bought the book to laugh at homeopaths. Here are the classic tricks to play in your statistical analysis to make sure your trial has a positive result.
Ignore the protocol entirely
Always assume that any correlation proves causation. Throw all your data into a spreadsheet programme and report as significant any relationship between anything and anything else, if it helps your case.
Play with the baseline
Sometimes, when you start a trial, quite by chance the treatment group is already doing better than the placebo group. If so, then leave it like that. If, on the other hand, the placebo group is already doing better than the treatment group at the start, then adjust for the baseline in your analysis.
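If you want to see what this choose-after-the-fact analysis is worth, here is a rough simulation sketch in Python (my own illustration, with entirely invented numbers, not data from any real trial): a drug with no effect at all, where we always report whichever of the two analyses, adjusted or unadjusted, happens to flatter it.
import random

random.seed(1)
N, TRIALS = 50, 10_000                      # patients per arm, simulated trials

def simulate_arm():
    # a mood score where higher is better; the "drug" has no real effect
    baselines = [random.gauss(50, 10) for _ in range(N)]
    changes = [random.gauss(0, 5) for _ in range(N)]
    endpoints = [b + c for b, c in zip(baselines, changes)]
    return changes, endpoints

mean = lambda xs: sum(xs) / len(xs)
unadjusted, adjusted, flattering = [], [], []

for _ in range(TRIALS):
    chg_drug, end_drug = simulate_arm()
    chg_plac, end_plac = simulate_arm()
    eff_unadj = mean(end_drug) - mean(end_plac)    # keeps any lucky head start at baseline
    eff_adj = mean(chg_drug) - mean(chg_plac)      # adjusts the head start away
    unadjusted.append(eff_unadj)
    adjusted.append(eff_adj)
    flattering.append(max(eff_unadj, eff_adj))     # chosen after seeing both

print(f"average 'effect', always unadjusted:      {mean(unadjusted):+.2f}")
print(f"average 'effect', always adjusted:        {mean(adjusted):+.2f}")
print(f"average 'effect', whichever looks better: {mean(flattering):+.2f}")
Each honest analysis averages an effect of zero, as it should; picking whichever looks better after the fact conjures a positive ‘effect’ out of pure noise.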
Ignore dropouts
People who drop out of trials are statistically much more likely to have done badly, and much more likely to have had side-effects. They will only make your drug look bad. So ignore them, make no attempt to chase them up, do not include them in your final analysis.
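Another rough sketch, again with invented numbers, shows how well this works: the drug below does nothing, but its worst responders are the most likely to drop out, so an analysis of ‘completers only’ makes it look better than placebo, while counting everybody shows the truth.
import random

random.seed(2)
N = 10_000                       # patients per arm (large, so the averages are stable)
mean = lambda xs: sum(xs) / len(xs)

# an "improvement" score where higher is better; the drug has no real effect
drug = [random.gauss(0, 1) for _ in range(N)]
placebo = [random.gauss(0, 1) for _ in range(N)]

def stays_in_trial(score, on_drug):
    # invented numbers: drug patients who are doing badly (and getting
    # side-effects) are far more likely to drop out than anyone else
    p_dropout = 0.6 if (on_drug and score < 0) else 0.1
    return random.random() > p_dropout

drug_completers = [s for s in drug if stays_in_trial(s, True)]
placebo_completers = [s for s in placebo if stays_in_trial(s, False)]

print(f"everyone counted:  drug {mean(drug):+.2f}   placebo {mean(placebo):+.2f}")
print(f"completers only:   drug {mean(drug_completers):+.2f}   placebo {mean(placebo_completers):+.2f}")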
Clean up the data
Look at your graphs. There will be some anomalous ‘outliers’, or points which lie a long way from the others. If they are making your drug look bad, just delete them. But if they are helping your drug look good, even if they seem to be spurious results, leave them in.
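Here is what that one-sided housekeeping buys you, in another invented simulation: a drug with no effect, where in each simulated trial we delete only the handful of points that most inconvenience it.
import random

random.seed(3)
N, TRIALS, K = 100, 2_000, 5      # patients per arm, simulated trials, points deleted
mean = lambda xs: sum(xs) / len(xs)
raw_diffs, cleaned_diffs = [], []

for _ in range(TRIALS):
    # higher score is better; neither arm genuinely responds
    drug = [random.gauss(0, 1) for _ in range(N)]
    placebo = [random.gauss(0, 1) for _ in range(N)]
    raw_diffs.append(mean(drug) - mean(placebo))

    # "clean up the data": delete only the outliers that hurt the drug
    drug_clean = sorted(drug)[K:]            # drop the drug arm's K worst scores
    placebo_clean = sorted(placebo)[:-K]     # drop the placebo arm's K best scores
    cleaned_diffs.append(mean(drug_clean) - mean(placebo_clean))

print(f"average drug-minus-placebo difference, raw data:  {mean(raw_diffs):+.3f}")
print(f"average drug-minus-placebo difference, cleaned:   {mean(cleaned_diffs):+.3f}")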
‘The best of five … no … seven … no … nine!’
If the difference between your drug and placebo becomes significant four and a half months into a six-month trial, stop the trial immediately and start writing up the results: things might get less impressive if you carry on. Alternatively, if at six months the results are ‘nearly significant’, extend the trial by another three months.
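This peeking is surprisingly powerful. In the sketch below (my own illustration with invented numbers, modelling only the stop-early half of the trick, though extending works by the same logic), a completely useless drug is checked at six interim points and declared a winner whenever p dips below 0.05; the false-positive rate comes out far above the advertised five per cent.
import random
from statistics import NormalDist, stdev

random.seed(4)
TRIALS = 2_000
LOOKS = [25, 50, 75, 100, 125, 150]     # interim "peeks", in patients per arm
norm = NormalDist()

def p_value(a, b):
    # a plain two-sample z-test; crude, but fine for a sketch at these sample sizes
    n = len(a)
    se = (stdev(a) ** 2 / n + stdev(b) ** 2 / n) ** 0.5
    z = (sum(a) / n - sum(b) / n) / se
    return 2 * (1 - norm.cdf(abs(z)))

single_look_wins = peeking_wins = 0
for _ in range(TRIALS):
    # the drug does nothing: both arms are pure noise
    drug = [random.gauss(0, 1) for _ in range(LOOKS[-1])]
    placebo = [random.gauss(0, 1) for _ in range(LOOKS[-1])]

    if p_value(drug, placebo) < 0.05:                 # one pre-planned analysis at the end
        single_look_wins += 1
    if any(p_value(drug[:n], placebo[:n]) < 0.05 for n in LOOKS):
        peeking_wins += 1                             # stop the moment it looks significant

print(f"false positives, one planned analysis:    {single_look_wins / TRIALS:.1%}")
print(f"false positives, stop whenever p < 0.05:  {peeking_wins / TRIALS:.1%}")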
Torture the data
If your results are bad, ask the computer to go back and see if any particular subgroups behaved differently. Interrogate enough subgroups and, by chance alone, one of them will eventually confess to responding to your drug.
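One last invented sketch shows why this works so reliably: give each patient twenty arbitrary yes-or-no labels, test a drug that does nothing within every labelled subgroup, and see how often at least one subgroup obligingly turns up ‘significant’.
import random
from statistics import NormalDist, stdev

random.seed(5)
TRIALS, N, LABELS = 500, 200, 20     # simulated trials, patients per arm, subgroup labels
norm = NormalDist()

def p_value(a, b):
    # the same rough two-sample z-test as in the sketch above
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (sum(a) / len(a) - sum(b) / len(b)) / se
    return 2 * (1 - norm.cdf(abs(z)))

def patient():
    # outcome is pure noise; each patient also gets 20 arbitrary yes/no labels
    return random.gauss(0, 1), [random.random() < 0.5 for _ in range(LABELS)]

trials_with_a_winning_subgroup = 0
for _ in range(TRIALS):
    drug = [patient() for _ in range(N)]
    placebo = [patient() for _ in range(N)]
    for k in range(LABELS):          # ask the computer about every subgroup in turn
        sub_drug = [score for score, labels in drug if labels[k]]
        sub_placebo = [score for score, labels in placebo if labels[k]]
        if p_value(sub_drug, sub_placebo) < 0.05:
            trials_with_a_winning_subgroup += 1
            break

print(f"useless drug, but some subgroup 'responds' (p < 0.05) in "
      f"{trials_with_a_winning_subgroup / TRIALS:.0%} of simulated trials")
With twenty separate looks at a five per cent threshold each, the real surprise would be if nothing ever came up.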