Publisher policies on AI use: is it time for change?
KEY TAKEAWAYS
- The increasing use of AI tools in academic publishing calls for policies that keep pace with the myriad ways that authors and researchers use AI.
- An AI risk register that looks at specific risks inherent in individual tools and the ways they are used, plus collaboration among publishers to create standardised guidance, could be the key.

Protecting the integrity of the scientific record becomes more challenging as the role of AI in academic publishing expands. In a recent article for The Scholarly Kitchen, Avi Staiman expresses his concerns about the lack of adequate publisher policies on AI use and sets out what publishers could do to step up their game.
Where do current policies come up short?
Staiman reports that while authors are eager to implement AI, most lack the expertise to navigate its full potential while protecting research integrity. For instance, Oxford University Press (OUP) reported that 76% of researchers use AI in their research, but 72% are also unaware of their institution’s policies on AI.
Alongside this, publishers’ struggles to keep up to date with the latest developments in AI hamper the development of suitable guidelines. Limitations of current policies include:
- lack of clarity on the roles of authors versus AI in individual cases (for example, who created the content vs who refined it)
- failure to consider the wide range of available AI tools and their differing uses (substantive vs non-substantive AI use)
- oversimplified AI policies that require only a blanket disclosure statement that AI was used, rather than specifying which tools were used and how.
Staiman argues that, given the diversity of AI tools that now exist — from those capable of performing statistical analysis, such as JuliusAI, to those assisting with literature searches, like Scite — the ways in which we tackle transparency and regulation need to evolve.
How can publisher AI policies keep pace with AI technology?
To this end, and inspired by the EU AI Act, Staiman suggests formulating an ‘AI risk register’ that assigns each AI tool a level of regulation matching both the potential risk inherent in that tool and the way it is being used in research. He also recommends eight practical actions for publishers:
- Develop standardised guidelines
- Update guidelines continuously
- Establish transparent and inclusive governance
- Boost learning on AI within individual organisations
- Assign different risk levels to AI tools
- Classify AI tools based on the type of use and the level of verification required
- Define clear roles for authors and AI
- Consider how to monitor and enforce AI policies
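To make the risk-register idea concrete, here is a minimal sketch of how such a register might be structured as data. This is purely illustrative: the risk levels, tool classifications, and verification rules shown are hypothetical assumptions, not classifications proposed in Staiman's article.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    # Hypothetical tiers, loosely echoing the EU AI Act's risk-based approach
    LOW = "low"        # e.g. grammar or style polishing
    MEDIUM = "medium"  # e.g. literature search assistance
    HIGH = "high"      # e.g. substantive analysis or content generation

@dataclass
class RegisterEntry:
    tool: str
    use_case: str
    risk: RiskLevel
    verification: str  # what an editor or reviewer would need to check

# Illustrative entries only; these classifications are assumptions
REGISTER = [
    RegisterEntry("JuliusAI", "statistical analysis", RiskLevel.HIGH,
                  "authors share data and independently confirm results"),
    RegisterEntry("Scite", "literature search", RiskLevel.MEDIUM,
                  "authors confirm that cited sources were read and verified"),
]

def required_disclosure(tool: str, use_case: str) -> str:
    """Look up the disclosure and verification a submission would need."""
    for entry in REGISTER:
        if entry.tool == tool and entry.use_case == use_case:
            return f"{entry.risk.value} risk: {entry.verification}"
    return "unregistered tool or use: full disclosure and editorial review"
```

The key design point is that risk attaches to a tool *and* its use, not to the tool alone: the same register could hold a low-risk entry for a tool's copy-editing mode alongside a high-risk entry for its content-generation mode.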
Staiman calls upon publishers to rapidly collaborate so that AI policies can keep pace with the fast-moving changes in AI technology.