New study examines last 70 years of animal research: a bleak picture

Source: Flickr/ Steve Jurvetson

Let’s assume for a moment that animal models are good predictors of human biology, and that animals can be used to predict human responses to drugs and other chemicals (a contentious assumption). To achieve useful results that advance our knowledge of human biology and support the development of new drugs and treatments for human diseases, we would then need many animal studies of good quality.

We don’t know exactly how many animals are used in research. In Australia alone, the estimate for 2013 is 6.7 million animals. Worldwide, it is estimated that 115 million animals are used in laboratory experiments every year. Every week around 3,500 new pieces of research involving animals are published.

But what about the quality of the research?

Over the years, numerous researchers have pointed to various flaws in animal research. This week, a systematic review of animal studies from the last 70 years was published in PLOS Biology.

Malcolm Macleod, Professor of Neurology and Translational Neuroscience at the University of Edinburgh, and 18 colleagues undertook the review of life sciences publications, examining the robustness of published research that involves animals. The authors checked the studies for the reporting of measures to reduce the risk of bias:

These approaches include random allocation of animals to an experimental group (to reduce confounding), blinded assessment of outcome measures (to reduce detection bias), a statement of sample size calculation (to provide reassurance that studies were adequately powered and that repeated testing of accumulating data was not performed), and reporting of animals excluded from the analysis (to guard against attrition bias and the ad hoc exclusion of data). Investigator conflict of interest might increase or decrease the risk of bias, and a statement of whether or not a conflict of interest exists may help the reader to judge whether this may have occurred. (p. 2)

Other types of bias, such as publication bias, were not examined.
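To make two of these measures concrete, here is a minimal sketch, in Python, of how random allocation and blinded outcome assessment might look in practice. Everything in it (animal IDs, group names, the coding scheme) is invented for illustration and does not come from the study itself:

```python
import random

# Hypothetical example: randomly allocate 20 animals to two groups,
# so that group assignment cannot be confounded with, say, cage order.
random.seed(42)  # record the seed so the allocation is reproducible

animal_ids = [f"rat-{i:02d}" for i in range(1, 21)]
random.shuffle(animal_ids)

treatment, control = animal_ids[:10], animal_ids[10:]

# Blinded assessment: the outcome assessor sees only opaque subject
# codes, not group labels, reducing detection bias. A colleague not
# involved in scoring keeps the key.
coded_labels = {aid: f"subject-{i:02d}"
                for i, aid in enumerate(sorted(animal_ids), start=1)}
key = {coded_labels[aid]: ("treatment" if aid in treatment else "control")
       for aid in animal_ids}

print(f"Treatment group: {sorted(treatment)}")
print(f"Control group:   {sorted(control)}")
# 'key' is revealed only after all outcomes have been scored.
```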

Publications indexed in PubMed

From a random sample of 2,000 publications indexed in PubMed, the authors

ascertained the reporting of randomisation where this would be appropriate, of the blinded assessment of outcome, of a sample size calculation, and of whether the authors had a potential conflict of interest. (p. 3)

In other words, in a study of good quality one would expect the reporting of these four measures to avoid bias: randomisation (where appropriate), blinded assessment of outcome, reporting of how the sample size was calculated, and a conflict of interest statement.
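For readers unfamiliar with the third measure, a sample size calculation is done before the experiment: given the effect size one hopes to detect, it determines how many animals are needed for the study to be adequately powered. A hypothetical sketch (the effect size, alpha and power values below are illustrative defaults, not taken from any reviewed study) using the statsmodels library:

```python
# Hypothetical a-priori power calculation for a two-group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.8,  # expected standardised difference (Cohen's d)
    alpha=0.05,       # two-sided significance level
    power=0.8,        # probability of detecting a true effect
)
print(f"Animals needed per group: {n_per_group:.0f}")  # ~26 per group
```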

This is what they found. Only:

  • 20% reported randomisation (where appropriate)
  • 3% reported blinded assessment of outcome
  • none reported sample size calculation
  • 10% reported a conflict of interest statement

Systematic reviews

Next, the team examined the reporting of measures to reduce the risk of bias in publications identified in a nonrandom sample of systematic reviews of in vivo studies. These systematic reviews had been produced by the Collaborative Approach to Meta-Analysis and Review of Experimental Data from Animal Studies (CAMARADES). The authors were also interested in any association between rigour and journal impact factor, so they selected for this analysis those publications for which they could find a journal impact factor for the year of publication.

This is what they found. Only:

  • 8% reported randomisation (where appropriate)
  • 5% reported blinded assessment of outcome
  • 7% reported sample size calculation
  • 5% reported a conflict of interest statement

High impact journals

High impact journals (i.e. those whose articles are frequently cited) are considered to be highly influential in their fields. Is research published in those journals at a lower risk of bias?

To answer this question, Macleod and colleagues examined the relationship between journal impact factor and reporting of risks of bias.

This is what they found:

… there was no relationship between journal impact factor and the number of risk-of-bias items reported … Only for a statement of a possible conflict of interest was reporting highest in the highest decile of impact factor, perhaps reflecting the editorial policies of such journals. (p. 6)
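For intuition, the kind of association being tested could be sketched as follows. This uses invented data and a simple Spearman rank correlation; the paper's actual analysis is more involved, so treat this only as an illustration of the question being asked:

```python
# Hypothetical sketch: does journal impact factor correlate with the
# number of risk-of-bias items (0-4) reported in each publication?
# The data below are invented purely for illustration.
from scipy.stats import spearmanr

impact_factors = [1.2, 2.5, 3.1, 4.8, 6.0, 9.7, 14.2, 31.5]
items_reported = [1,   0,   1,   0,   2,   1,   0,    1]

rho, p_value = spearmanr(impact_factors, items_reported)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
# A rho near zero, as the study found, would mean higher-impact
# journals do not report more of these measures.
```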

Animal studies from the UK’s top five universities

Finally, the authors examined more than 1,000 animal studies from the UK’s top five universities that were published in 2009 or 2010.

This is what they found. Only:

  • 4% reported randomisation (where appropriate)
  • 3% reported blinded assessment of outcome
  • 4% reported inclusion or exclusion criteria or both (a priori determination of rules for inclusion and exclusion of subjects and data is considered a core issue for study evaluation)
  • 4% reported sample size calculation

Conclusions

The authors drew the following conclusions:

Firstly, we show that reporting of measures to reduce the risk of bias in certain fields of research has increased over time, but there is still substantial room for improvement. Secondly, there appears to be little relationship between journal impact factor and reporting of risks of bias, consistent with previous claims that impact factor is a poor measure of research quality. Thirdly, risk of bias was prevalent in a random sample of publications describing in vivo research. Finally, we found that recent publications from institutions identified in the UK 2008 RAE as producing research of the highest standards were in fact at substantial risk of bias, with less than a third reporting even one of four measures that might have improved the validity of their work. Further, there were significant differences between institutions in the reporting of such measures.

It is sobering that of over 1,000 publications from leading UK institutions, over two-thirds did not report even one of four items considered critical to reducing the risk of bias, and only one publication reported all four measures. (pp. 9-10)

Source: Flickr/ Amber Kost

So even if we assume that animal research provides useful insights and leads to new drugs and treatments for humans, the work by Professor Macleod and his colleagues provides – yet again – a bleak picture of the state of animal research. Contrary to widely accepted standards, measures to prevent bias are only occasionally reported in published research. I call this shoddy research. It’s a waste of research funds. It results in needless suffering and death for millions of animals. It results in us humans missing out on new drugs and treatments. It needs to stop.

Animal experimentation needs to stop and be replaced with more reliable and more ethical methods. We owe this to all animals, human and non-human.

Source: Humane Research Australia

Source:

Macleod, M. R., Lawson McLean, A., Kyriakopoulou, A., Serghiou, S., de Wilde, A., Sherratt, N., et al. (2015). Risk of bias in reports of in vivo research: A focus for improvement. PLOS Biology, 13(10), e1002273. (The full article is available online and is relatively easy to understand.)

Further reading:

Science Media Center (2015) Expert reaction to new study examining robustness of animal-based research over the last 70 years.

The Guardian (19 April 2015) Scientists told to stop wasting animal lives.

Pacific Standard (14 October 2015) Animal research falls short on experimental procedures.
