I have a feature out today in Nature on the retractions system. Over the past decade, the number of retraction notices has grown 10-fold, though it’s still a minuscule sliver (about 0.02%) of the research literature. This rise is sharpening focus on various problems with the system for retracting papers. Retractions even have their own blog now, Retraction Watch, which covered a good two-thirds of last year’s retractions in detail.
One question that’s often asked is whether we are changing our reasons for retraction. How many retractions are due to misconduct, and has this proportion increased in recent years?
The answer is frustrating: analyses can’t yet say for sure, although there are hints that calling out plagiarism is making an increasing contribution to the total. Only the barest headlines could be included in the feature, so this blog is an additional analysis for those who really want to know the details.
The only way to check the reasons behind retractions is to look in detail at retraction notices, and even then the gnomic language used, perhaps for fear of lawsuits, often makes it impossible to determine exactly why a paper has been retracted. Still, those who have tried to do this, usually by looking at a sample of a few hundred papers, think that perhaps 45-55% of papers retracted in the past decade were pulled for some sort of misconduct. (The key papers are E. Wager & P. Williams, J. Med. Ethics (2011); G. Steen, J. Med. Ethics (2010); and unpublished but comprehensive work from J. Budd, Z. Coble and K. Anderson (preprint here).) Earlier studies suggest that misconduct played a smaller part in retractions before the turn of the century, accounting for perhaps 30-40% (S. Nath et al., Med. J. Aust., 185, 152; 2006; J. Budd et al., Bull. Med. Libr. Assoc., 87, 437; 1999). But it’s not clear that the methods and classifications used in these studies are comparable. Here are Wager’s findings, charted:
The rise in retractions really kicked in at the start of this millennium. Unfortunately, there’s no clear answer on whether relative proportions of reasons for retraction have changed over the last decade. With such small numbers, repeat offenders with a string of retractions can quickly alter the balance from year to year. The team led by John Budd at the University of Missouri has yet to conclude its analysis of some 1,112 retracted articles from 1997-2009.
Still, an unpublished and informal subgroup analysis from Wager’s paper suggests that retractions for redundant publication might be increasing. In her sample, the proportion retracted for duplicate publication was around 24% in 2006-08, but only around 16% in 2003-05; there were none in 2001-02 and only one or two in earlier years. And as for plagiarism, Skip Garner, at the University of Texas Southwestern Medical Center in Dallas, says that the text-matching software eTBLAST, which he introduced in 2008, has led to at least 96 retractions of older papers, though as of last year only 46 of those had been recorded in Medline and so were counted in the statistics. “In the year it was introduced, our technology at least doubled plagiarism-related retractions,” he claims.
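For readers curious what “text-matching” means in practice: the general idea behind tools of this kind is to compare overlapping word sequences between two documents and flag pairs with suspiciously high overlap. The sketch below is a toy illustration only, not eTBLAST’s actual algorithm (which matches abstracts against the Medline corpus at scale); the function names, example texts and threshold are all invented for the example.

```python
# Toy illustration of text-matching for duplicate/plagiarism screening.
# NOT eTBLAST's real method: just word-trigram overlap (Jaccard similarity)
# between two short texts, with a threshold for flagging.

def ngrams(text, n=3):
    """Return the set of word n-grams in a lower-cased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity between the n-gram sets of two texts (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

abstract_1 = "we report a tenfold rise in retraction notices over the past decade"
abstract_2 = "we report a tenfold rise in retraction notices over recent years"
abstract_3 = "protein folding dynamics were simulated at atomic resolution"

# Near-duplicate pair scores high; unrelated pair scores zero.
print(similarity(abstract_1, abstract_2) > 0.5)   # flag for human review
print(similarity(abstract_1, abstract_3) == 0.0)  # clearly unrelated
```

In real screening pipelines, a high score only flags a pair for human review; editors still have to judge whether the overlap is legitimate (e.g. boilerplate methods text) or redundant publication.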
Further grist for the ‘plagiarism’ mill comes from data provided by Thomson Reuters, which show just how sharply retractions from China and India have increased in the past few years. The chart shows the national origin of retracted articles in the Web of Science, comparing 2001-2005 with 2006-2010.
As you can see, the overall picture is largely the same as that found by Grant Steen in his analysis of PubMed retractions, noted on this blog last year but analysed more thoroughly in this blog post by Bob O’Hara.
An in-depth look at India’s retractions, by T. A. Abinandanan, a materials professor at the Indian Institute of Science in Bangalore, concludes that 45 of India’s 70 retractions in PubMed over the past decade were for plagiarism. (His paper is here, and was also covered in this news story by Nature India). So, here are hints that the gradual diffusion of software for detecting plagiarism and image manipulation across the publishing world may have particularly contributed to a rise in retractions.
For those interested in trends by journal, I also found that while Nature, Science and other high-impact journals have the most retractions, data from Thomson Reuters show that their retractions have been spread fairly evenly over the past decade. The growth of the past five years has come from journals with lower impact factors.
Many people I spoke to wondered whether the data suggest a real increase in fraud or sloppy mistakes. This is a valid concern. But, as I say in the feature, the number of retractions is so small, compared with the misconduct that surveys suggest goes on, that the record is too fragmentary for any useful conclusions to be drawn about overall rates of sloppiness or misconduct. Science as a culture very rarely bothers to update the papers it puts into the written record. Disputes, errors and misconduct are surely more widespread than the numbers of retractions and corrections suggest.