Jennifer Sakhnovsky, MA, JAMA Network
The short answer is yes, but with a caveat: transparency is key. It is no secret that generative artificial intelligence (AI) models can create various types of content, including text, images, audio, and video. However, people’s feelings about using these tools in scientific research are mixed, with some academics expressing concern and others embracing the new technology.
Regardless of personal opinion, people are using these tools: a 2023 Nature survey of more than 1,600 scientists found that nearly 30% had used generative AI tools to assist with writing manuscripts.1 As 2023 began, many research articles already listed the generative AI tool ChatGPT as an author.2 By October of the same year, 87 of the 100 highest-ranked scientific journals had published online guidance for authors on generative AI use for content creation at their publications.3
JAMA and the specialty journals in the JAMA Network were among those that provided online guidance, encouraging authors, reviewers, and editors to be transparent and responsible and to follow best practices for AI use in medical and scientific publishing. Importantly, the guidelines noted that “nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.”4 More information on ethical and legal considerations can be found in chapter 5.1.12 of the AMA Manual of Style.
If authors choose to use AI tools to create content or assist with manuscript creation, they must disclose such use in the Methods or Acknowledgment section of the article. The following example, found in chapter 3.15.13 of the AMA Manual of Style, can be used as an acknowledgment for an article that uses generative AI:
“The authors acknowledge using ChatGPT (GPT-3.5, OpenAI) for text editing to improve the fluency of the English language in the preparation of this manuscript on September 15, 2023. The authors affirm that the original intent and meaning of the content remain unaltered during editing and that ChatGPT had no involvement in shaping the intellectual content of this work. The authors assume full responsibility for upholding the integrity of the content presented in this manuscript.”
As presented in this example, the following information must be included in the disclosure of AI use for content generation:
- Name of the AI software platform, program, or tool;
- Version and extension numbers;
- Manufacturer;
- Date(s) of use; and
- A brief description of how the AI was used and on what portions of the manuscript or content.
In addition to the above, authors should provide the following information if AI was used in the study:
- Prompt(s) used, their sequence, and any revisions;
- Institutional review board/ethics review, approval, waiver, or exemption;
- Methods or analyses included to address and manage AI-related bias and inaccuracy of AI-generated content; and
- Adherence to a relevant reporting guideline if followed.
These guidelines emphasize accountability and human oversight when AI is used in medical publishing. To assist authors with adhering to new policies regarding AI, the JAMA Network’s automated manuscript submission system asks all authors whether AI was used for content creation.5 If AI tools were used to generate creative content, authors must provide specific information about their use and take responsibility for the integrity of the AI tools’ outputs; noncreative use, such as basic grammar and spelling checks, does not need to be disclosed.
JAMA Network authors are also cautioned against inputting identifiable patient information into an AI model and are asked to consider potential copyright and intellectual property concerns. Limitations of AI tools, including potential inaccuracies or biases and, ideally, how the authors have managed them, should be addressed in an article’s Discussion section.
The JAMA Network also encourages authors to consult relevant EQUATOR guidelines (https://www.equator-network.org) depending on the type of study and AI use,4 including the following:
- Reporting guidelines for clinical trial reports for interventions involving artificial intelligence (CONSORT-AI);
- Guidelines for clinical trial protocols for interventions involving artificial intelligence (SPIRIT-AI);
- Minimum information about clinical artificial intelligence modeling (MI-CLAIM);
- Checklist for Artificial Intelligence in Medical Imaging (CLAIM);
- MINimum Information for Medical AI Reporting (MINIMAR) for developing reporting standards for AI in health care; and
- Updated guidance for reporting clinical prediction models that use regression or machine learning methods (TRIPOD-AI).
At the time of writing this blog post, several AI-related reporting guidelines listed by the EQUATOR Network are still under development, including the following:
- Preferred Reporting Items for Systematic Reviews and Meta-Analyses – Artificial Intelligence Extension (PRISMA-AI);
- Reporting Guidelines for Diagnostic Accuracy Studies Evaluating Artificial Intelligence Interventions (STARD-AI Extension);
- ChatGPT and Artificial Intelligence Natural Large Language Models for Accountable Reporting and Use Guidelines (CANGARU);
- The Chatbot Assessment Reporting Tool for Clinical Advice: A Reporting Checklist for Chatbot Assessment Studies (CHART);
- Reporting items for ChatGPT and other similar cHatbots usEd in mEdical Research (CHEER);
- Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis for Large Language Models (TRIPOD+LLM); and
- Reporting Guidelines for Artificial Intelligence Research in Mental Health.
As AI tools continue to gain momentum and develop rapidly, editorial leaders of scientific journals are wise to guide the responsible use of such tools, and that guidance will likely evolve over time. Like other publishers, the JAMA Network has moved expeditiously to publish guidelines on AI use. As is true for the journal’s other style rules, authors who publish in JAMA and the JAMA Network specialty journals will be expected to follow this guidance going forward.
References
- Van Noorden R, Perkel JM. AI and science: what 1,600 researchers think. Nature. 2023;621(7980):672-675. doi:10.1038/d41586-023-02980-0
- Mazzoleni S, Ambrosino N. How artificial intelligence is changing scientific publishing—unrequested advice for young researchers II. Pulmonology. 2024;30(5):413-415. doi:10.1016/j.pulmoe.2024.04.011
- Ganjavi C, Eppler MB, Pekcan A, et al. Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis. BMJ. 2024;384:e077192. doi:10.1136/bmj-2023-077192
- Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344
- Flanagin A, Kendall-Taylor J, Bibbins-Domingo K. Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots. JAMA. 2023;330(8):702-703. doi:10.1001/jama.2023.12500
November 27, 2024