
by KnowledgeWorks Global Ltd.

For the last year and a half, the media and publishing worlds have been abuzz about one particularly disruptive technology: artificial intelligence (AI). Though AI is not new to the publishing ecosystem, the launch of ChatGPT and the growing daily use of Large Language Models (LLMs) for web research, writing assistance, and idea generation have raised questions (and problems) about how developer OpenAI trained its chatbot, how we are using this technology in our daily and professional lives, and what it all means for the future.

As an industry innovator in the use of AI to automate workflows and one of many entities experimenting with new applications of this technology, KGL is taking the opportunity at the start of the year to boldly forecast where AI will take scholarly publishing, and the wider industry, in 2024.

AI in Peer Review

Journal submissions have grown sharply across regions and academic disciplines. Yet at the same time, editors are contending with shrinking reviewer pools, limited resources, and funding constraints, making the time-honored tradition of peer review more challenging than ever. Editors are increasingly wondering whether AI can help solve these problems. While LLM-generated reviews are discouraged (last summer, funding agency NIH outright banned the use of generative AI in its peer review process), it seems inevitable that journal publishers will start relying on AI tools to tackle some of the time-consuming, manual tasks associated with peer review, such as screening submissions for required elements, identifying reviewers, checking for conflicts of interest, and assessing language quality for copyediting.
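Much of this screening does not even require generative tools. As a purely illustrative sketch, assuming a hypothetical Manuscript record and a made-up list of required sections (not any specific vendor's product), a rule-based pre-check might look like this in Python:

```python
from dataclasses import dataclass, field

# Hypothetical required elements a journal might screen for before
# sending a submission out to review; the exact list is an assumption.
REQUIRED_SECTIONS = [
    "abstract",
    "methods",
    "data availability statement",
    "conflict of interest disclosure",
    "ethics statement",
]

@dataclass
class Manuscript:
    title: str
    full_text: str
    author_affiliations: list[str] = field(default_factory=list)

def screen_required_elements(ms: Manuscript) -> list[str]:
    """Return the required sections that appear to be missing.

    A real screening tool would use NLP or LLM classification rather
    than substring matching; this only sketches the workflow shape.
    """
    text = ms.full_text.lower()
    return [s for s in REQUIRED_SECTIONS if s not in text]

def has_reviewer_conflict(ms: Manuscript, reviewer_affiliation: str) -> bool:
    """Naive conflict-of-interest check: shared institutional affiliation."""
    return reviewer_affiliation.lower() in (a.lower() for a in ms.author_affiliations)

if __name__ == "__main__":
    ms = Manuscript(
        title="Example submission",
        full_text="Abstract ... Methods ... Data Availability Statement ...",
        author_affiliations=["University A"],
    )
    print("Missing sections:", screen_required_elements(ms))
    print("Conflict with University A reviewer:", has_reviewer_conflict(ms, "University A"))
```

A production system would replace the substring matching with trained classifiers, but the workflow is the same: screen first, route to human editors second.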

Increased Use of AI for Efficiency

Though AI and Natural Language Processing (NLP) have long been used to streamline publishing processes, from pre-editing and manuscript triage to composition, generative AI has enabled the industry to experiment with additional workflows to improve efficiency. Many common procedures are now on the table, including plagiarism checks, translations, video transcripts, assessment creation, coding, illustrations, alt-text generation, and more. This year, we expect to see wider adoption of the more successful AI trials by publishers and their suppliers looking to reduce or eliminate redundant, labor-intensive processes.
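To make one of these efficiency checks concrete, here is a minimal sketch of a similarity screen using only Python's standard library. The 0.8 threshold and the tiny in-memory corpus are assumptions for illustration; real plagiarism detection matches against massive indexed corpora with document fingerprinting, not pairwise string comparison.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how similar two passages are."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_overlap(submission: str, corpus: dict[str, str],
                 threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return (source_id, score) pairs exceeding the threshold, highest first.

    The threshold is an arbitrary assumption chosen for this demo.
    """
    scores = ((src, similarity(submission, text)) for src, text in corpus.items())
    return sorted(((s, round(r, 2)) for s, r in scores if r >= threshold),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    corpus = {
        "paper-001": "Large language models have enabled new publishing workflows.",
        "paper-002": "Peer review remains the cornerstone of scholarly publishing.",
    }
    passage = "Large language models have enabled many new publishing workflows."
    print(flag_overlap(passage, corpus))  # flags paper-001 as a close match
```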

Bias in AI

As interest in AI has grown, so too have discussions around its limitations. The subject of bias in machine learning algorithms in particular has been garnering worldwide attention, as instances in which AI models have skewed data and propagated prejudice and inequity continue to grow. Addressing this major issue has become a priority for innovators, institutions, and governments alike, who see bias in AI as a barrier to widespread adoption, a cause for skepticism and distrust, and indeed potentially a dangerous influence on our own human behaviors.
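One common way such bias is quantified, shown here as a generic illustration with invented toy numbers rather than any real model's output, is the demographic parity difference: the gap between groups in the rate of favorable model outcomes.

```python
# Demographic parity difference: a simple fairness metric comparing the
# rate of favorable model outcomes across groups. All data below is
# fabricated purely for illustration.

def positive_rate(outcomes: list[int]) -> float:
    """Fraction of favorable (1) outcomes for a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in favorable-outcome rates; 0 means parity on this metric."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

if __name__ == "__main__":
    # 1 = model recommended a favorable outcome, 0 = unfavorable (toy data)
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% favorable
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% favorable
    print(demographic_parity_difference(group_a, group_b))  # 0.375
```

A gap this large would prompt scrutiny of the training data and model; parity on this one metric, of course, does not by itself establish that a system is fair.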

Legislation Restricting AI Usage

In Europe, the Artificial Intelligence Act, passed last year, is designed to protect democracy and fundamental rights and guard against bias (as noted above) while still encouraging innovation. While significant AI legislation has not yet been passed in other parts of the world, as usage continues to proliferate and more problems arise, we are likely to see other countries follow suit.

Adoption of AI Protocols

Prominent journal publishers like Elsevier have already put policies in place governing how AI and AI-assisted technologies can be used in article submissions. We will continue to see more such restrictions and usage guidelines in scholarly publishing, aimed at preventing plagiarism and the proliferation of false data while still taking advantage of how AI can improve research capabilities for non-English speakers and academics with fewer resources.

Continued Search for Ethical Ways to Use AI

It is hard to ignore the stories and use cases showing how AI is benefiting science, but many researchers still do not trust the technology: De Gruyter’s recent survey showed that 78 percent of those who used AI had partial or unreliable results. But just because academics, and the publishers that serve them, have concerns about upholding research integrity doesn’t mean they have ruled out adopting AI in the future. There will continue to be testing of the technology, and guardrails put in place, to find ways to apply AI ethically and improve scholarly communications for all stakeholders.

KnowledgeWorks Global Ltd. (KGL) is the industry leader in editorial, production, online hosting, and transformative services for every stage of the content lifecycle. We are your source for peer review services, market analysis, intelligent automation, digital delivery, and more. Email us at info@kwglobal.com.
