
In years to come, 2024 will likely be remembered as the year Artificial Intelligence truly exploded in academic publishing: a defining period in which the industry moved beyond the “experimental phase” and began deploying the technology to boldly reshape its future.

From OA to peer review and from citation analysis to discoverability, AI is now permeating every aspect of the publishing ecosystem, even as efforts intensify to confront some of the ethical concerns and challenges it brings. For better or for worse, keeping up with the numerous launches, partnerships, rollouts and guidelines being issued has become an almost impossible challenge.

In this post we cast our eyes back over some of the most exciting AI stories that hit the headlines this year, providing a month-by-month account of the most talked-about developments that will likely transform the industry for years to come.

January – Universal Standardization

As the use of generative AI skyrockets among the research community, publishers have been hastily updating their policies to dictate how authors should and shouldn’t use GPTs and other AI tools in their research. With each publisher laying down the law in its own way, the landscape has become increasingly confusing. In January, a group of researchers, with the backing of the Committee on Publication Ethics (COPE) and the proposed collaboration of 100,000 academics, announced a plan to standardize AI-related guidance across the industry. While the CANGARU Guidelines Initiative is just one of many standardization efforts launched in recent years, it is perhaps the most ambitious and credible to date.

February – Spotlight on AI in Peer Review

Peer review has long been considered the area of scholarship most primed for disruption at the hands of generative AI, and in early 2024 we began to see how this would likely play out. Several larger publishers, including Elsevier and Taylor & Francis, began to integrate AI into their peer review processes, using new tools to assist with reviewer selection and matching across their publications. This development is expected to significantly reduce delays in the peer review process, optimize reviewer workload distribution and ensure more accurate and reliable matches, ultimately improving the overall quality of reviews in the long run.

March – LLM Abuse Case Goes Viral

A paper published in the journal Surfaces and Interfaces sparked a widespread ethics debate after including the familiar phrase “certainly, here is a possible introduction for your topic” in its introduction. The apparently peer-reviewed paper, which had evidently been written or edited with the assistance of an LLM, caused immediate controversy, not because LLMs are forbidden by the publisher, but because the authors had failed to declare LLM usage in their submission, a clear violation of the journal’s policies. The unfortunate incident led to a social media pile-on, with hundreds of academics commenting on the controversy and speculating on the future of academic publishing and AI’s role within it.

April – Boosting Citation Analysis

With a host of new launches and product updates from the likes of Scopus and Google Scholar, April was a crucial month in the rapidly evolving landscape of citation analysis. By incorporating AI technologies, citation analysis tools have become more sophisticated than ever, allowing researchers to gain deeper insights into the impact of their work far beyond simple citation counts. These updates let researchers delve into the context of citations, exploring, for example, the quality of the journals citing a specific article, the geographic diversity of citations and the influence of citing publications.

May – Paper Mill Crackdown

As we discussed in a recent blog post, paper mills pose a significant threat to the integrity of academic publishing. In May, one major publisher took a decisive step to tackle the paper mill problem head on and send an important message to the industry. Announcing the closure of 19 journals that were overseen by a subsidiary as part of an integration plan, the publisher continued its war on paper mills, having closed four journals “to mitigate against systematic manipulation of the publishing process” in 2023. The announcement thrust concerns around phony research and fake papers into the spotlight and highlighted the uphill challenge publishers face in combatting the issue.

June – AI Tools to the Rescue

Taking the battle to the paper mills and preventing fake research and problematic images from being published are becoming huge priorities for publishers. In June, Springer Nature unveiled Geppetto and SnappShot, two new proprietary, AI-powered tools to help the publisher detect AI-generated content and analyse image integrity. Commenting on the launch, Chris Graf, Director of Research Integrity at Springer Nature, said: “Our new tools are helping us to stay one step ahead of the fraudsters and ensure that the research we publish is robust and can be trusted to be used and built on.” With research integrity now under more threat than ever before, publishers are increasingly investing in external support to help them confront the problem or, in Springer Nature’s case, developing products and solutions in-house.

July – The Era of the Big AI Deal Begins

AI is a lucrative business, and in July three of the larger academic publishers revealed just how far technology companies were willing to go to train their AI models, and how much they were prepared to pay publishers for the privilege. Wiley, Oxford University Press and Informa were the first in the industry to announce big-money deals with Microsoft and other tech firms, giving them access to academic content and other data to “improve the relevance and performance” of their AI systems. The deals sparked new debate in the industry, particularly around protecting author rights, copyright and remuneration; similar agreements have since become more widespread, with newspaper and magazine publishers like Condé Nast and trade publishers such as HarperCollins also jumping on the bandwagon.

August – Trouble in the Cosmos

The popular Australian science magazine Cosmos attracted attention when it made the decision to publish six AI-generated explainer articles, on topics ranging from black holes to carbon sinks, over the summer. The move drew criticism across the scientific and journalistic professions, particularly among the magazine’s contributors, who claimed their prior work had been used to train a custom LLM without their permission or consultation. According to the magazine, the articles were part of an experiment that is under “continual review”.

September – Big Steps in Fake Science Detection

A new AI tool developed by a visiting research fellow at the State University of New York has been reported to detect fake research with 94 per cent accuracy. In a recent trial, xFakeSci, which launched in the summer, outperformed common data mining algorithms, which tend to score accuracy rates of between 38 and 52 per cent. On the strength of these impressive results, its creator hopes the new tool will play a vital role in “preserving the authenticity and reliability of scientific communication.”

October – Taking an Opt-in Approach

In the wake of the controversy that swept through the industry during the summer months, Cambridge University Press & Assessment (CUPA) took a slightly different approach when announcing its big AI deal at the Frankfurt Book Fair in October. The publisher adopted an opt-in policy before licensing content to tech firms to train their LLMs, contacting 20,000 authors to seek permission and explain in detail how their work would be used. According to Managing Director Mandy Hill, the decision inevitably made the process harder, but “the author relationship is too important.”

November – A Slow Embrace

A report in Inside Higher Ed, based on a recent survey and an Ithaka S+R study into the adoption of generative AI in scholarly publishing, highlighted how the industry is grappling with the many new tools and applications at its disposal. The study found that while AI is helping to streamline back-end publishing tasks such as editing and peer review, the research community has been slower and more cautious in adopting it. The article reported that “Researchers are already raising concerns that the freely accessible information—some of which hasn’t undergone rigorous peer review—used to train some large language models (LLMs) could undermine the integrity of scholarly research.”

A week is a long time in AI. Never has this phrase been more apt than in 2024. As the year draws to a close and we take stock of just how far we’ve traveled during this relentless and transformative period, we can only imagine what kind of academic publishing landscape we’ll be looking back upon this time next year.

KnowledgeWorks Global Ltd. (KGL) is the industry leader in editorial, production, online hosting, and transformative services for every stage of the content lifecycle. We are your source for peer review services, market analysis, intelligent automation, digital delivery, and more. Email us at info@kwglobal.com.
