You might find that your drug works very well in Chinese women aged fifty-two to sixty-one. ‘Torture the data and it will confess to anything,’ as they say at Guantanamo Bay.

Try every button on the computer
If you’re really desperate, and analysing your data the way you planned does not give you the result you wanted, just run the figures through a wide selection of other statistical tests, even if they are entirely inappropriate, at random.
And when you’re finished, the most important thing, of course, is to publish wisely. If you have a good trial, publish it in the biggest journal you can possibly manage. If you have a positive trial, but it was a completely unfair test, which will be obvious to everyone, then put it in an obscure journal (published, written and edited entirely by the industry): remember, the tricks we have just described hide nothing, and they will be obvious to anyone who reads your paper attentively, so it’s in your interest to make sure it isn’t read beyond the abstract. Finally, if your finding is really embarrassing, hide it away somewhere and cite ‘data on file’. Nobody will know the methods, and it will only be noticed if someone comes pestering you for the data to do a systematic review. Hopefully, that won’t be for ages.
How can this be possible?
When I explain this abuse of research to friends from outside medicine and academia, they are rightly amazed. ‘How can this be possible?’ they say. Well, firstly, much bad research comes down to incompetence. Many of the methodological errors described above can come about through wishful thinking as much as through mendacity. But is it possible to prove foul play?
On an individual level, it is sometimes quite hard to show that a trial has been deliberately rigged to give the right answer for its sponsors. Overall, however, the picture emerges very clearly. The issue has been studied so frequently that in 2003 a systematic review found thirty separate studies looking at whether funding in various groups of trials affected the findings. Overall, studies funded by a pharmaceutical company were found to be four times more likely to give results that were favourable to the company than independent studies.
One review of bias tells a particularly Alice in Wonderland story. Fifty-six different trials comparing painkillers like ibuprofen, diclofenac and so on against each other were found. People often invent new versions of these drugs in the hope that they might have fewer side-effects, or be stronger (or stay in patent and keep making money). In every single trial the sponsoring manufacturer’s drug came out as better than, or equal to, the others in the trial. On not one occasion did the manufacturer’s drug come out worse. Philosophers and mathematicians talk about ‘transitivity’: if A is better than B, and B is better than C, then C cannot be better than A. To put it bluntly, this review of fifty-six trials exposed a singular absurdity: all of these drugs were better than each other.
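To see why that cannot add up, here is a minimal sketch in Python, using invented pairwise verdicts rather than the data from the actual fifty-six trials: it treats each result as a ‘drug X beat drug Y’ arrow and checks whether the arrows can be put in one consistent order, or whether they loop back on themselves, which is exactly what ‘everything is better than everything else’ amounts to.

```python
# Illustrative only: hypothetical pairwise verdicts, not the real review data.
# Each tuple (winner, loser) means a sponsored trial reported winner > loser.
results = [("A", "B"), ("B", "C"), ("C", "A")]  # every sponsor's drug "wins"

def has_cycle(edges):
    """Return True if the 'better than' relation loops back on itself,
    i.e. the verdicts cannot all be true at once under transitivity."""
    graph = {}
    for winner, loser in edges:
        graph.setdefault(winner, set()).add(loser)
        graph.setdefault(loser, set())

    visiting, done = set(), set()

    def visit(node):
        visiting.add(node)
        for nxt in graph[node]:
            if nxt in visiting:                  # back edge: a cycle
                return True
            if nxt not in done and visit(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(visit(n) for n in graph if n not in done)

print(has_cycle(results))  # True: A > B > C > A cannot be ordered consistently
```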
But there is a surprise waiting around the corner. Astonishingly, when the methodological flaws in studies are examined, it seems that industry-funded trials actually turn out to have better research methods, on average, than independent trials.
How can we explain, then, the apparent fact that industry-funded trials are so often so glowing? How can all the drugs simultaneously be better than all of the others? The crucial kludge may happen after the trial is finished.
Publication bias and suppressing negative results
‘Publication bias’ is a very interesting and very human phenomenon. For a number of reasons, positive trials are more likely to get published than negative ones. It’s easy enough to understand, if you put yourself in the shoes of the researcher. Firstly, when you get a negative result, it feels as if it’s all been a bit of a waste of time. It’s easy to convince yourself that you found nothing, when in fact you discovered a very useful piece of information: that the thing you were testing doesn’t work.
Rightly or wrongly, finding out that something doesn’t work probably isn’t going to win you a Nobel Prize – there’s no justice in the world – so you might feel demotivated about the project, or prioritise other projects ahead of writing up and submitting your negative finding to an academic journal, and so the data just sits, rotting, in your bottom drawer. Months pass. You get a new grant. The guilt niggles occasionally, but Monday’s your day in clinic, so Tuesday’s the beginning of the week really, and there’s the departmental meeting on Wednesday, so Thursday’s the only day you can get any proper work done, because Friday’s your teaching day, and before you know it, a year has passed, your supervisor retires, the new guy doesn’t even know the experiment ever happened, and the negative trial data is forgotten forever, unpublished. If you are smiling in recognition at this paragraph, then you are a very bad person.
Even if you do get around to writing up your negative finding, it’s hardly news. You’re probably not going to get it into a big-name journal, unless it was a massive trial on something everybody thought was really whizzbang until your negative trial came along and blew it out of the water, so as well as this being a good reason for you not bothering, it also means the whole process will be heinously delayed: it can take a year for some of the slacker journals to reject a paper. Every time you submit to a different journal you might have to re-format the references (hours of tedium). If you aim too high and get a few rejections, it could be years until your paper comes out, even if you are being diligent: that’s years of people not knowing about your study.
Publication bias is common, and in some fields it is more rife than in others. In 1995, only 1 per cent of all articles published in alternative medicine journals gave a negative result. The most recent figure is 5 per cent negative. This is very, very low, although to be fair, it could be worse. A review in 1998 looked at the entire canon of Chinese medical research, and found that not one single negative trial had ever been published. Not one. You can see why I use CAM as a simple teaching tool for evidence-based medicine.
Generally the influence of publication bias is more subtle, and you can get a hint that publication bias exists in a field by doing something very clever called a funnel plot. This requires, only briefly, that you pay attention.
If there are lots of trials on a subject, then quite by chance they will all give slightly different answers, but you would expect them all to cluster fairly equally around the true answer. You would also expect that the bigger studies, with more participants in them, and with better methods, would be more closely clustered around the correct answer than the smaller studies: the smaller studies, meanwhile, will be all over the shop, unusually positive and negative at random, because in a study with, say, twenty patients, you only need three freak results to send the overall conclusions right off.
A funnel plot is a clever way of graphing this. You put the effect (i.e., how effective the treatment is) on the x-axis, from left to right. Then, on the y-axis (top-to-bottom, maths-skivers) you put how big the trial was, or some other measure of how accurate it was. If there is no publication bias, you should see a nice inverted funnel: the big, accurate trials all cluster around each other at the top of the funnel, and then as you go down the funnel, the little, inaccurate trials gradually spread out to the left and right, as they become more and more wildly inaccurate – both positively and negatively.
If there is publication bias, however, the results will be skewed. The smaller, more rubbish negative trials seem to be missing from the picture, because nobody got around to writing them up or publishing them, while the small positive trials are all still there: the bottom of your funnel is lopsided, with one corner missing, and that asymmetry is the hint that negative findings have quietly disappeared into the bottom drawer.
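To make that picture concrete, here is a minimal simulation sketch in Python, using numpy and matplotlib and entirely made-up numbers rather than any real trial data: it invents a few hundred trials of varying size around one true effect, draws the funnel, then quietly drops the small negative trials and draws it again, so the lopsided shape that publication bias leaves behind is plain to see.

```python
# Illustrative simulation only: invented trials, not real data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

true_effect = 0.3                      # the "real" benefit of the treatment
sizes = rng.integers(20, 2000, 400)    # trial sizes, from tiny to large
# Smaller trials give noisier estimates: spread shrinks with sqrt(n)
effects = rng.normal(true_effect, 2.0 / np.sqrt(sizes))

# Publication bias: small trials with unflattering results quietly vanish
published = (effects > 0) | (sizes > 500)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
ax1.scatter(effects, sizes, s=8)
ax1.set_title("All trials: symmetrical funnel")
ax2.scatter(effects[published], sizes[published], s=8)
ax2.set_title("Published trials only: lopsided funnel")
for ax in (ax1, ax2):
    ax.axvline(true_effect, linestyle="--")   # the true answer
    ax.set_xlabel("Estimated effect")
ax1.set_ylabel("Trial size (participants)")
plt.tight_layout()
plt.show()
```

The dashed line marks the true effect: in the right-hand panel the cloud of small trials has lost its left-hand side, and a naive average of the published results would drift to the flattering side of the truth.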
The most heinous recent case of publication bias has been in the area of SSRI antidepressant drugs, as has been shown in various papers. A group of academics published a paper in the