Many publishers lack policies on using AI chatbots to write papers

Jeremy Y. Ng, a metascientist at the Centre for Journalology at the Ottawa Hospital Research Institute, and colleagues audited the publicly available policies of 163 members of the International Association of Scientific, Technical, and Medical Publishers (STM). In a preprint, published ahead of peer review, the researchers report that only 56 of those 163 members had a policy on whether authors can submit papers written by AI chatbots (medRxiv 2024, DOI: 10.1101/2024.06.19.24309148). Forty-nine of those 56 publishers required authors to declare when they used chatbots, and none allowed researchers to list AI tools such as ChatGPT as an author, something that has happened in some cases.
“The use of AI chatbots in academic publishing is a new and rapidly evolving space,” Ng says. “The absence of industry-wide standards or guidelines also contributes to the slow adoption, as publishers may be hesitant to implement policies without a clear framework to follow.”
Four of the publishers surveyed banned the use of chatbots outright. But one of those, the American Association for the Advancement of Science, which publishes Science, has since reversed course.
The analysis also revealed that 19 of the surveyed publishers said researchers should not cite chatbots as primary sources. Eighteen allowed the use of chatbots in research methods, such as data organization, and 33 permitted researchers to use chatbots to help write non-methods sections, including the background and introduction of manuscripts. Fourteen publishers said authors could use AI to generate images, and 15 allowed authors to use chatbots to proofread manuscripts.
“Every publisher should have a policy on the attribution and use of these automated tools because they are attractive to researchers, but their use has so many caveats,” says Matt Hodgkinson, ethics adviser at MyRA, a generative AI–powered app that suggests key themes from uploaded texts such as interview transcripts.
But Hodgkinson questions whether the numbers in the study are accurate. Some academic publishers aren’t STM members, he says, and many STM members are trade bodies and vendors rather than publishers and so won’t have policies on AI chatbots. “The authors [of the study] need to check every included ‘journal publisher,’ and accounting for these issues will seriously shift the percentages,” he says.
Hodgkinson says a key omission from the study is the use of generative AI by peer reviewers or editors. Some studies have already demonstrated that peer reviewers may be using ChatGPT. “This is a major worry for the validity and rigor of peer review,” he says.
In previous work, Ng and his team found that many authors may not be familiar with AI chatbots, perhaps because of a lack of training. Ng suspects that as a result, researchers may also be unfamiliar with publishers’ policies on generative AI. “Continuous work in this field is crucial to develop evidence-based policies that ensure ethical standards, transparency, and quality are maintained,” Ng says.