Shortly after the Food and Drug Administration (FDA) withdrew Vioxx from the US market in 2004 (shortly at least on philosophical time scales; see Horton 2004, Jüni et al. 2004, and Krumholz et al. 2007), Xavier Carné & Núria Cruz published a paper titled “Ten lessons to be learned from the withdrawal of Vioxx” in 2005 in the European Journal of Epidemiology.
The first two lessons they drew were [page 128]:
Lesson 1: “Our methodological tools to measure drugs benefit/risk ratios with precision are imperfect. RCT [randomised controlled trials] and observational studies are still substantially influenced by biases coming from the author’s affiliation.”
Lesson 2: “A publicly available RCT registry is urgently needed. It is the only source that can provide the public, clinicians, scientists, and policy-makers with accurate and comprehensive on line information about all the studies that are being conducted.”
Ten years later, sadly, not much has changed. Trials are still sponsored and conducted by pharmaceutical companies. The then “urgently needed” registry is, today, still urgently needed.
But even if such a registry existed, not all would be well. A study published in 2014 found that [page 47] “For example, studies of published trial reports showed that the poor description of interventions meant that 40–89% were non-replicable.”
One further persistent problem is the small number of patients in drug trials. A 2005 study found that “Parallel-group trials recruited a median of 80 participants overall, with 32 per treatment group”; see also this 2013 study in neuroscience. A study published in 1980 found that “The median size of the 2000 or so cancer trials currently in progress is only about 50 patients, which is absurdly small.”
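To get a feel for how imprecise a trial with 32 patients per arm is, consider a quick back-of-the-envelope calculation. The sketch below is purely illustrative: the figure of 8 adverse events among 32 patients is an invented example, and the interval is the standard normal approximation, not anything from the studies cited above.

```python
import math

# Hypothetical trial arm: 8 of 32 patients report an adverse event.
# (These numbers are made up for illustration.)
events, n = 8, 32
p_hat = events / n                        # observed rate: 0.25
se = math.sqrt(p_hat * (1 - p_hat) / n)   # normal-approximation standard error
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"95% CI for the adverse-event rate: ({lo:.2f}, {hi:.2f})")
```

With 32 patients the 95% confidence interval runs from roughly 10% to 40%: the true rate could plausibly be a quarter of the observed one or well above it, which is exactly why a single trial of this size settles very little.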
Of course, recruiting patients for drug trials is fraught with ethical problems and the very practical problem of finding people willing to become “guinea pigs”. However, small numbers of patients mean that the reported results of a single study carry little weight. Hence the need to combine results from different studies.
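Combining studies does help with the precision problem, at least in principle. A minimal sketch of the standard fixed-effect, inverse-variance approach is below; the three effect estimates and standard errors are invented for illustration, and real meta-analysis must also worry about heterogeneity, study quality, and the publication biases discussed further down.

```python
import math

# Hypothetical effect estimates (say, log odds ratios) and standard
# errors from three small trials. All numbers are invented.
studies = [
    (0.30, 0.25),  # (effect, standard error)
    (0.10, 0.30),
    (0.25, 0.20),
]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2,
# so more precise studies count for more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f}, pooled SE: {pooled_se:.3f}")
```

The pooled standard error comes out smaller than that of any single study, which is the whole point of pooling. But note the obvious caveat: the result is only as trustworthy as the studies going in, which is why non-replicable trials and selective publication undermine the exercise.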
To make sensible decisions regarding the use and safety of drugs, decision makers need the relevant information to be available to them. Given the current state of affairs, one is left to wonder how this is possible. Single studies are too small to support sensible decision making. Combining different studies is fraught with problems to begin with, and the problem of non-replicable studies makes matters even worse. Finally, well-known and much-discussed publication biases mean that certain information is more likely than other information to reach the decision makers.
No sane person can believe that making these decisions is easy. However, providing decision makers with the information they require does seem like a very sensible idea. I would hence like to echo the call for a publicly available study registry (covering RCTs as well as other kinds of studies) and for more replicable medical research.