Last year, in our “The Year in AI” blog post, we looked at how developments in Artificial Intelligence were impacting scholarly publishing, reflecting on the biggest headlines and stories from each month. Conscious of AI’s evolving impact, we are now turning our attention to one of its most direct consequences: the renewed focus on Research Integrity.

In a world of misinformation, plagiarism, AI-generated fake articles, paper mills and predatory journals, where the academic community is under very real pressure to publish in greater volume, to process research more quickly and to reach larger audiences, publishers have had to evolve drastically to confront this new reality. As a byproduct of this breakneck pace of change, conversations around maintaining academic rigor and trust in science have naturally become increasingly urgent.

Whether dominating conference programs, anchoring experts’ op-eds, driving product development for publishers, or featuring regularly on this very blog, “Research Integrity” is a term that has very much risen to the fore. It is symptomatic of our troubling times: an industry clashing with external forces while striving to preserve its credibility and keep a central pillar of its very existence from crumbling.

In this post we look back at some of the most important Research Integrity touchpoints of 2025, providing a month-by-month account of the most talked-about developments we have seen.

January – Sneaky citations

Kicking off the year with controversy, the OA journal International Journal of Innovative Science and Technology faced disciplinary action after research fraud sleuths revealed that over 80,000 “sneaked” citations had been inserted into its metadata without corresponding references in the underlying research papers. Crossref, the DOI registration agency, moved quickly to discipline the journal and set about revoking its trusted status.

February – A retraction frenzy

Retraction Watch reported a surge in high-profile retraction cases, headlined by Neurosurgical Review, which retracted scores of commentaries and letters to the editor after being inundated with AI-generated manuscripts. Within just a few weeks, the journal had reportedly retracted 129 papers, most of them by authors affiliated with Saveetha University in Chennai, India.

March – Embracing AI

An article in Inside Higher Ed reported on the growing trend among major academic publishers of using AI to assist with integrity checks. With Wiley, Elsevier and Springer Nature all announcing the adoption of AI-powered tools and guidelines early in the year, the article explained how the technology was increasingly being used to detect issues like plagiarism, fabricated content, and citation problems during manuscript screening and peer review.

April – Detecting non-disclosure

Although their policies tend to vary, most publisher guidelines state very clearly that authors should disclose or flag any use of generative AI in their scientific papers. But this doesn’t necessarily happen. In April, eagle-eyed sleuths detected “hundreds” of cases in which AI tools appear to have been used without disclosure. In some cases, the telltale signs of AI use were corrected, while in others, papers were silently removed.

May – Tenure revocation

In an unprecedented move, Harvard revoked the tenure of Harvard Business School professor Francesca Gino and dismissed her following claims that she falsified data in her ethics research. This was the first time in almost 80 years that the Ivy League institution had revoked a professor’s tenure, and it highlighted the seriousness with which top universities are treating research fraud. Harvard has refused to comment on the case, and Gino has maintained her innocence.

June – The Integrity Index

Offering a counterpoint to traditional university rankings, which focus primarily on publication counts and citation impact, the Research Integrity Index (RI²) was devised to assess institution-level vulnerabilities in research integrity. Launched by a Beirut-based academic, the Index tracks retractions, problematic journal publications and anomalous citation patterns, and has been heralded as a game-changing development that underscores the need to “rethink how research performance is measured to safeguard academic integrity and mitigate gaming behaviors.”

July – Peer review manipulation exposed

One publisher’s research integrity auditing team uncovered a large peer review manipulation network comprising around 35 authors and editors found to be engaging in “undisclosed conflicts of interest and citation manipulation” across multiple journals. A total of 122 articles were subsequently retracted, and more than 4,000 papers published across multiple journals and publishers were flagged by the team as warranting closer scrutiny.

August – Focus on fake papers

A comment piece in Times Higher Education by editor Seongjin Hong laid bare the stark realities of the AI threat to academic publishing. Observing an influx of AI-generated manuscripts that mimic previously published studies, he examined how these fake papers may become increasingly sophisticated and, in turn, harder to detect. “It is now abundantly evident that AI is disrupting the flow of scientific publishing in multiple ways,” he concluded.

September – Europe’s paper mill powerhouse

An investigation reported by several media outlets claimed that a Ukrainian-linked paper mill was responsible for producing 1,500 fraudulent papers, published across approximately 380 journals. This body of suspect papers could represent one of the largest documented paper mill operations in Europe and epitomizes the global trend of fraudulent scientific outputs threatening research integrity.

October – Image integrity

In October, one of the highest-profile research integrity headlines centered on a cluster of retractions linked to alleged image manipulation by researchers at Duke University. A total of eight papers published by two emeritus researchers were retracted over “image duplications and manipulation”, highlighting the ongoing tensions in the industry over how best to detect and police visual data problems.

November – A fallen giant

In a move that sent shockwaves through the industry, the leading scientific journal Science of the Total Environment was delisted from Clarivate’s Web of Science database. The delisting, which followed evidence of irregular publication practices, compromised peer review and “quality and integrity concerns”, spotlighted the challenges posed by high-volume publishing pressures and commercial incentives in academic publishing.

Reflecting on a tumultuous year, in which we’ve seen publishers respond to a seemingly endless barrage of controversies, threats and bad actors, we go into 2026 with renewed hope that the power of good will prevail and that research integrity will continue to be as foundational and vital as ever.

KnowledgeWorks Global Ltd. (KGL) is the industry leader in editorial, production, online hosting, and transformative services for every stage of the content lifecycle. We are your source for society services, research integrity, intelligent automation, digital delivery, and more. Email us at info@kwglobal.com.
