Avoid these common missteps in conducting your preclinical studies
Preclinical experiments using animal models are an essential part of the drug development process, both in early discovery and in regulatory preclinical development. The ability to translate preclinical data into clinical success depends on a number of factors, not least the reproducibility and robustness of the preclinical data that is generated. Below we explore common and potential missteps when designing and executing preclinical studies and suggest how to overcome them.
How much data?
The number of animals to use in each group of a preclinical study is often dictated by several factors. However, these numbers are rarely decided by power calculations: a review of 3396 cardiovascular disease studies found that only 2.3% reported any form of statistical analysis to determine sample size. When choosing group sizes, consideration must also be given to the 3Rs for the ethical use of animals in research, to practical constraints, and to the percentage difference you hope to observe in your experimental model.
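As a rough guide to how group size scales with the expected effect, the standard normal-approximation sample-size formula for a two-sample comparison can be sketched in a few lines of Python. This is a minimal illustration, not a substitute for a proper power analysis; the function name `n_per_group` and the default thresholds are our own choices.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate animals needed per group for a two-sample comparison,
    using the normal-approximation formula n = 2 * ((z_a + z_b) / d)^2,
    where d is the standardised effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a large effect (d = 1.0) needs roughly 16 animals per group at
# 80% power; halving the effect size roughly quadruples the requirement.
print(n_per_group(1.0))   # 16
print(n_per_group(0.5))   # 63
```

Note how quickly the numbers grow for subtle effects: this is why underpowered studies are so common when group sizes are chosen by convention rather than calculation.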
Whilst studies with insufficient subjects are commonplace, we also need to consider the experimental design itself. It is natural for scientists to try to answer multiple questions within the same experiment. In such situations it is important to ensure that there are sufficient samples, and an appropriate experimental design, to answer all the key questions. Overcomplicating a study design can cause issues in data interpretation and risks failing to achieve statistical significance (if indeed that is the aim). It might be tempting, for instance, to include a time series of sampling to characterise the timescale of an effect, rather than simply testing whether an effect occurred. Too much data can pose as many challenges as not enough.
Randomisation and blinding
According to guidelines from the European Medicines Agency, randomisation and blinding are the most important techniques for avoiding bias when designing clinical trials. For some preclinical studies, these processes are also vitally important, particularly where one of the study outcomes has a subjective element. Randomisation avoids selection bias, while blinding helps to avoid performance and detection biases. Despite this, in a review of articles published in Cancer Research, only 28% of studies reported randomisation when allocating animals to treatment groups. None disclosed the randomisation procedure, and only two reported any use of blinding. Of course, in most preclinical oncology studies the endpoint is likely to be an objective measurement such as xenograft volume or colony count; one could therefore argue that blinding is less important than it would be in a cognition or nociception study. The reporting of randomisation and blinding protocols is now a requirement for manuscripts involving animal work submitted to Nature journals. As such, transparent and rigorous protocols should be decided at study initiation, with the appropriate methods selected and recorded if the data is to be published. A recent study assessing the impact of this change in editorial policy at Nature found an increase in studies mentioning randomisation from 8% to 24%; however, only 11% described the specific method used.
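In practice, simple randomisation can be scripted rather than done ad hoc, which also makes the procedure trivial to report. The sketch below shows one way to do this with Python's standard library; the helper name `randomise` and the seeded generator are illustrative assumptions, not a prescribed method.

```python
import random

def randomise(animal_ids, group_names, seed=None):
    """Randomly allocate animals to treatment groups of (near-)equal size.

    Shuffles the IDs with a seedable generator so the allocation is
    reproducible and reportable, then deals them round-robin into groups;
    group sizes differ by at most one if the split is uneven.
    """
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    return {name: ids[i::len(group_names)] for i, name in enumerate(group_names)}

allocation = randomise(range(1, 31), ["vehicle", "low_dose", "high_dose"], seed=42)
for group, animals in allocation.items():
    print(group, sorted(animals))
```

Recording the seed alongside the protocol means the exact allocation can be reproduced and disclosed, addressing the reporting gap described above.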
In addition to randomisation, blinding can reduce bias. Under blinding, subjects are allocated to treatment groups and the group names and/or treatments are replaced with codes; the names and treatments remain confidential until a predetermined timepoint. An experimenter who is not blinded to a treatment group may unknowingly influence the outcome of an experiment through differences in the care the animals receive, and these differences may then be mistaken for a therapeutic effect in subsequent statistical analyses. As mentioned above, this is especially important if one of the recorded outcomes of the study has a subjective element, as commonly happens in behavioural studies.
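The coding step described above can be as simple as shuffling opaque labels over the group names and lodging the key with someone outside the experiment. A minimal sketch, assuming a hypothetical helper `blind_groups` (not a specific tool):

```python
import random

def blind_groups(group_names, seed=None):
    """Replace group names with opaque codes ('A', 'B', ...).

    Returns (labels, key): `labels` maps each group to its code and is
    used on cages and data sheets; `key` maps codes back to groups and
    is withheld from experimenters until the predetermined unblinding
    timepoint (in practice, held by a third party).
    """
    rng = random.Random(seed)
    codes = [chr(ord("A") + i) for i in range(len(group_names))]
    rng.shuffle(codes)
    key = dict(zip(codes, group_names))
    labels = {group: code for code, group in key.items()}
    return labels, key

labels, key = blind_groups(["vehicle", "treatment"], seed=7)
```

The essential design point is the separation: anyone scoring the animals sees only the codes in `labels`, while `key` stays sealed until analysis is locked.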
If one of the experimental endpoints in a preclinical study requires samples to be profiled (gene expression analysis, for example), it is imperative that the same care and attention is given to that part of the study as to the in-life portion. This includes ensuring consistent sample storage and limiting the number of batches in which the data is profiled. Ideally, all experimental samples would be run in a single batch. Where this is not possible, it is recommended to include bridging samples from earlier batches in later ones, so that batch effects can be estimated and corrected in downstream analysis.
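One way to plan this is to fix the batch layout before any profiling begins, carrying a few bridging samples from the first batch into every subsequent one. A minimal sketch of such a layout, assuming an illustrative helper `assign_batches` and our own sample naming:

```python
def assign_batches(sample_ids, batch_size, n_bridging=2):
    """Split samples into profiling batches, re-running a few 'bridging'
    samples from the first batch in every later batch so that batch
    effects can be estimated and corrected during analysis."""
    ids = list(sample_ids)
    batches = [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]
    bridging = batches[0][:n_bridging]  # re-profiled in each later batch
    return [batch if i == 0 else batch + bridging
            for i, batch in enumerate(batches)]

# 20 samples on an 8-sample platform: batches of 8, 8+2 and 4+2,
# with S01 and S02 repeated as bridging controls.
batches = assign_batches([f"S{i:02d}" for i in range(1, 21)], batch_size=8)
```

In a real study the bridging samples would usually be chosen to span the biological range rather than taken in order, but the principle, shared samples measured in every batch, is the same.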
In summary, to ensure robust and reliable preclinical data it is important to consider the whole process: from experimental design through to sample profiling and the analysis beyond. Mapping out the processes and logistics of data collection as well as observing some simple rules from the very beginning will allow you to get the most out of your research.