A group of investigators devoted to finding errors in scientific research has shocked some of the world’s most prestigious research institutions and the scientific community as a whole.
The highly publicized case of alleged image manipulation in a paper co-authored by a former Stanford University president and a leader at the Dana-Farber Cancer Institute has garnered national media attention, prompting speculation from top scientific leaders that this may only be the beginning.
“At this pace, we’re going to see another paper published every few weeks,” said Holden Thorp, editor-in-chief of Science, one of the world’s two most influential scientific journals.
Investigators argue that their work is necessary to rectify the scientific record and prevent generations of researchers from pursuing futile avenues due to flawed papers. Some scientists are calling for universities and academic publishers to reform their approach to addressing flawed research.
“I understand why the investigators who discovered these issues are so furious,” said Michael Eisen, a biologist, former editor of the journal eLife and a prominent advocate for reform in scientific publishing. “Authors, journals, institutions, everyone is incentivized to downplay their significance.”
For approximately a decade, investigators identified widespread problems with scientific images in published papers and voiced their concerns online, but received little attention. Last summer, neuroscientist and then-Stanford University President Marc Tessier-Lavigne resigned amid scrutiny over allegations of image manipulation in a study he co-authored and a report criticizing his lab culture. Since then, there has been a noticeable shift. According to the report from the scientific panel hired to investigate the allegations, Tessier-Lavigne himself was not found to have engaged in misconduct, but members of his lab appear to have manipulated images in questionable ways.
In January, a blogger’s scathing post exposed questionable research by top leaders at the Dana-Farber Cancer Institute, which subsequently retracted six papers and requested corrections to dozens more.
In his resignation statement, Tessier-Lavigne said the committee found no evidence that he was aware of any misconduct and that he had never submitted papers he believed to be inaccurate. In a statement from its research integrity officer, Dana-Farber said it had taken decisive action to correct the scientific record and that discrepancies in the images were not necessarily evidence that the authors were attempting to deceive.
“We are experiencing a moment of public awareness that really turned a corner when the Marc Tessier-Lavigne scandal unfolded, and the Dana-Farber scandal is the most recent continuation of that,” Thorp said.
This long-standing issue is now receiving national attention just as new artificial intelligence tools are making it easier to spot problems, ranging from honest errors and sloppy science to images unethically manipulated with photo-editing software.
This increased scrutiny is prompting changes in how some publishers operate. Universities, journals, and researchers are being urged to consider new technologies, the potential backlog of undiscovered errors, and methods for enhancing transparency when problems are identified.
This comes at a challenging time in academic circles. Venture capitalist Bill Ackman, in a post last month on X, discussed using artificial intelligence to identify plagiarism by leaders of top universities with whom he has ideological differences, raising questions about political motivations in plagiarism investigations. More broadly, public trust in scientists and science has steadily declined in recent years, according to the Pew Research Center.
Eisen said he does not believe the investigators’ concerns about scientific images veer into “McCarthyist” territory. “I think they’re homing in on a specific type of problem in the literature, and they’re right. That’s bad,” he said.
Scientific publishing is the primary means by which scientists establish a foundation of understanding in their fields and share new discoveries with their colleagues. Before publication, journals review submissions and solicit feedback from outside researchers in the relevant field to identify errors or faulty inferences, a process known as peer review. Journal editors also screen submissions for plagiarism and copy edit them prior to publication. The system is not perfect, and it ultimately relies on the good faith of researchers not to manipulate their results.
Over the past 15 years, scientists have become increasingly concerned that some researchers are digitally altering images in papers to distort or enhance their results. The field of image-integrity screening has expanded significantly since Jana Christopher, a scientific imaging expert with the European Federation of Biochemical Societies and its journals, began working in it nearly 15 years ago. At the time, “no one was doing this, and people were in denial about research misconduct,” Christopher said. “The prevailing belief was that it was very rare and that instances of manipulated results were few and far between.”
Scientific journals now employ entire teams dedicated to screening images and ensuring their accuracy. Retractions of published papers have risen sharply in recent years: more than 10,000 papers were retracted last year, a record, according to a Nature analysis. A loose collective of scientific sleuths applies external pressure, often identifying and flagging errors and potential manipulation on the online forum PubPeer. Many of these investigators receive little or no compensation or public recognition for their work.
“There’s a certain level of urgency,” Eisen said. An analysis of comments on more than 24,000 articles posted on PubPeer found that over 62% of the comments concerned image manipulation. For years, investigators relied on keen observation, pattern recognition, and a working knowledge of photo-editing tools. In recent years, artificial intelligence tools capable of scanning papers for anomalies have been rapidly developed and improved.
Scientific journals are now using similar technology to detect errors before publication. In January, Science announced that it is using an artificial intelligence tool called Proofig to scan papers undergoing editing and peer review. Thorp said the Science family of six journals had quietly incorporated the tool into its workflow approximately six months before the January announcement; previously, the journals relied on visual inspection to catch these types of issues. During editing, Proofig flagged problems in papers that had not yet been published; in some cases there were “logical explanations” for the problematic images, and in others the authors addressed the issues before publication. “Less than 1% of errors are significant enough to prevent a paper from being published,” Thorp said.
Chris Graf, director of research integrity at the publisher Springer Nature, said the company is developing and testing “in-house AI image integrity software” to identify duplicated images. Graf’s research integrity department also uses Proofig to assess papers when concerns arise after publication. Screening practices vary among its journals, but some Springer Nature publications use Adobe Photoshop tools to manually check for image manipulation, looking for discrepancies in images from experiments that visualize cellular components or in raw experimental data.
“Although AI-based tools can facilitate and scale investigations, we still believe the human element is important in all investigations,” Graf said, emphasizing that image-recognition software is not infallible and that human expertise is needed to guard against false positives and false negatives. No tool can detect every mistake or fraud.
“There are many facets to that process. You can never catch them all,” Thorp said. “As journals, institutions, and authors, we need to do a better job of addressing this when it occurs.”
Many of these scientific sleuths have grown frustrated that their concerns have been disregarded, or that investigations have progressed slowly with little public resolution. Sholto David, who publicly raised his concerns about the Dana-Farber research in a blog post, said that responses from journal editors were so unsatisfactory that he had nearly “given up” on writing letters to them about the errors he discovered. Elisabeth Bik, a microbiologist and longtime image investigator, said that when she reports image issues, frequently “nothing happens.”
While public comments on PubPeer questioning research data can stimulate discussion surrounding questionable research, authors and institutions often do not directly respond to online criticism. Although journals can issue corrections or retractions, it is generally the responsibility of research institutions or universities to investigate incidents. If the incident pertains to federally funded biomedical research, the federal Office of Research Integrity may conduct an investigation.
Thorp said institutions need to take responsibility when errors are discovered and act more swiftly to openly and candidly address what occurred in order to regain public trust. “The university has been very sluggish in responding, very slow in instituting the process, and the longer this goes on, the greater the damage will be,” he said. “I don’t know what would have happened if Stanford had simply said these papers are flawed, instead of initiating this investigation.”
Some scientists are concerned that image manipulation is just the tip of the iceberg when it comes to scientific integrity: problems in images are far easier to detect than errors buried in the underlying data in spreadsheets. And while cracking down on problematic papers and holding individuals accountable is important, some scientists believe those measures fail to address a larger problem: a scientific culture that rewards researchers who publish the most exciting results rather than the most enduring ones. “Scientific culture itself doesn’t say we care about getting it right. It says we care about getting papers that make a splash,” Eisen said.
Source: www.nbcnews.com