Nothing undermines the credibility of science and scientists more than the retraction of a high-profile paper. Earlier this year there was a prime example of that when researchers at one of Japan’s most prestigious research institutions, the Riken Center for Developmental Biology in Kobe, had to retract a study that had gathered worldwide attention. The study, about a new method for creating embryonic-like stem cells called stimulus-triggered acquisition of pluripotency, or STAP, was discredited after it was discovered that the lead author had falsified data.
Publication retractions have increased dramatically in recent years [Credit: PMRetract]
The STAP incident drew international coverage and condemnation, and raised the question: how common is this, and what can be done to combat it? A panel discussion at the World Stem Cell Summit in San Antonio, Texas, entitled “Reproducibility and rigor in research: What have we learned from the STAP debacle,” tackled the subject head-on.
Ivan Oransky, medical journalist and co-founder of the website Retraction Watch, posed the question: “Does stem cell research have a retraction problem?” He says:
“The answer to my question is yes. But so does everyone else. All of science has a retraction problem, not just stem cells.”
Oransky says the number of retractions doubled from 2001 to 2010. One author has had 183 papers retracted – the record so far – but to break into the top five you need at least 50 retractions. The offenders come from all over the world, from the US to Germany and Japan, and most recently Azerbaijan.
Oransky says part of the problem is the system itself. Getting research results published is critical to advancing a career in science, and those pressures push people to cut corners, take risks, or even falsify data and manipulate images in order to get a paper into a high-profile journal. In most cases, journals charge a fee of several hundred to several thousand dollars to publish studies, so they have little incentive to dig too deeply into findings looking for flaws, as it might undermine their own business model.
“Some authors, more than 100, have been caught reviewing their own papers. When the journal they were submitting their paper to asked for the names of recommended reviewers they would submit the names of people who are legitimate reviewers in the field but instead of giving real email addresses they would give fake email addresses, ones they controlled so they could submit their own reviews under someone else’s name.”
What gave them away is that the potential “reviewers” didn’t first reply to say “yes, I’ll review.” Instead they responded by sending back a full review of the paper, raising suspicions and ultimately leading to their detection.
Graham Parker, a researcher at Wayne State University School of Medicine and the editor of Stem Cells and Development, says spotting the problem is not always easy:
“As an editor I regard scientific misconduct as fabrication, falsification or plagiarism of data, but there are lots of other areas where it’s not always so clear – there are often shades of gray.”
He says researchers may make an honest mistake, or include duplicative images, and in those cases should be allowed to fix the problems without any stigma attached. But when serious cases of data falsification are uncovered, they can have a big impact, retarding scientific progress and sapping public confidence in the field as a whole.
Jeanne Loring, a stem cell scientist at The Scripps Research Institute and a recipient of funding from CIRM, says the STAP incident was actually a sign of progress in this area. Ten years ago, when the Korean researcher Hwang Woo-Suk claimed to have cloned human embryos, it took more than a year before he was found to have falsified the data. In the STAP case, it took a little over a week for other researchers to start raising red flags:
“One of the real heroes in this story is Paul Knoepfler (a CIRM-funded researcher at UC Davis) who takes on difficult issues in his blog. It took Paul just 8 days to post a request for people to crowdsource this study, asking people who were trying to replicate the findings to report their results – and they did, showing they failed over and over again.”
Parker said it’s getting easier for editors and others in the field to double-check the data in studies. For example, new software programs allow him to quickly check submitted manuscripts for plagiarism. And he says there is a growing number of people who enjoy looking for problems:
“Nowadays it’s so easy for people to dig very deeply into papers and check up on every aspect of them, from the content to the methodology to the images they use and whether those images were in any way manipulated to create a false impression. Once they find a problem with one paper they’ll dig back through that scientist’s past papers to see if they can find other problems, dating back years, that were never caught at the time.”
He says that in most cases researchers caught falsifying data or deliberately misleading journals face few consequences:
“Often the consequences of misconduct are very mild, the equivalent of a slap on the wrist, which does not discourage others from trying to do the same.”
The panel members agreed that tougher penalties are needed. In extreme cases, where falsified research could lead to serious consequences for patients, even the threat of criminal action could be warranted.
But the panel ended on an encouraging note. Oransky says, for example, that medical journals are now paying more attention and imposing stricter rules, and that there is even scientific evidence that “doing the right thing might pay off”:
“One study recently showed that if you made an honest error and corrected it publicly, not only does the stigma of retraction not apply to you, you don’t get a decrease in your citations – you actually get an increase. So we’d like to think that doing the right thing is a good thing, and might actually pay off.”