Leading science journals, including Science and Nature, have moved to restrict the use of ChatGPT, an AI-powered chatbot developed by OpenAI, in scientific papers and publications. Science has updated its editorial policies to specify that text generated by ChatGPT cannot be used in submitted work, nor can figures, images, or graphics be products of such AI tools. Both journals also state that an AI program cannot be listed as an author.
Holden Thorp, the editor-in-chief of Science, says that violating the updated policy would constitute scientific misconduct, just as plagiarism or image manipulation would. Nature similarly states that researchers and publishers need to lay down ground rules for using large language models (LLMs) ethically. Rather than banning their use outright, the journal requires researchers to document any use of LLMs in their methods or acknowledgments sections.

The academic community has expressed concern over the rapid rise of ChatGPT and similar AI tools. Critics argue that the chatbot's ability to parse natural language and respond in a human-like manner raises questions about accountability and the ethics of using AI in research. By barring ChatGPT from authorship and restricting its use in submitted work, these journals aim to set a clear standard for responsible AI use in the scientific community.