Poor Study Design is Damaging Preclinical Research
An analysis conducted by Professor Malcolm Macleod and Dr Emily Sena of the University of Edinburgh has revealed that several key methods required to reduce bias in studies are not being widely used. The analysis, covering 2,671 papers published between 1992 and 2011, examined work that had previously “been included in meta-analyses of experimental disease treatments,” according to Nature.
There are four methods that, according to Professor Macleod and Dr Sena, should be used in every drug discovery experiment. Blind assessment, where investigators do not know what the animals are being treated with, and randomising the assignment of animals to treatment or control groups are two of these. The other two are producing a conflict-of-interest (COI) statement and calculating the sample size required for a result to be termed statistically robust. The importance of the latter is currently being strongly emphasised in the life sciences, as we discussed here.
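Two of these methods can be sketched in a few lines of code. The following is a minimal illustration, not the authors' own procedure: the sample-size function uses the standard normal-approximation formula for a two-group comparison (real studies often use t-distribution-based power calculations, which give slightly larger numbers), and the randomisation function is a simple shuffle-and-split. All names here are hypothetical.

```python
import math
import random
import statistics

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group for a two-group
    comparison, given a standardised effect size (Cohen's d)."""
    z = statistics.NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

def randomise(animal_ids, seed=None):
    """Randomly split animals into treatment and control groups."""
    ids = list(animal_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]  # (treatment, control)

# A medium effect (d = 0.5) at 80% power needs ~63 animals per group:
print(sample_size_per_group(0.5))
treatment, control = randomise(range(10), seed=42)
```

The point of the first function is that group sizes follow from the expected effect, not from convenience; the point of the second is that allocation is decided by the random number generator, not by the experimenter.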
According to the analysis, these methods have been used more in recent years – the figures from 2008–2011 paint a more encouraging picture than those from 1992–1998. For example, in more recent studies, COI statements appear almost 30% of the time, compared with less than 10% in the earlier period. However, even in recent studies, each of these methods is used less than 40% of the time. As Professor Macleod told journalists at a recent press conference, “nobody in science should not be doing this stuff.” These methods are crucial to unbiased, statistically sound experimentation and reporting.
According to Professor Macleod, even highly renowned scientific journals are publishing papers produced without these methods. Nor is there any guarantee that study design will be better at well-reputed institutions: papers from, among others, the University of Oxford and Imperial College London reported blinding and randomisation less than 20% of the time.
So what can be done to solve the problem? Journals like Nature are taking action by endorsing the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines, developed as part of an initiative by the NC3Rs. These guidelines emphasise good reporting and make a considerable contribution to better study design. Nature also has a policy requiring authors to include experimental and analytical design information in their work, and to complete a checklist verifying that information on the use of the four recommended methods is available.
It is clear that other journals need to follow Nature’s example, and that study designs need to be re-examined across the board. Tools like ActualHCA allow for more statistically robust sampling, but issues like the lack of randomisation still need to be addressed. It seems that a review of standards and the requirements for submission to journals is imminent.