
The stance of academic journals on the use of AI

With the rise of artificial intelligence (AI) tools such as ChatGPT, there is increased scrutiny of their implications for authorship and their use in medical writing. As AI is likely to become an integral part of the research process, understanding how prominent medical journals view submissions involving AI tools is of paramount importance. This blog post summarises the stance of some of the largest medical journals on the use of AI tools and technologies in manuscript submissions.

New England Journal of Medicine

The New England Journal of Medicine (NEJM) has a history spanning over 200 years and is one of the most influential general medicine journals globally. With the rise of generative AI, the NEJM has adopted the policies specified by the International Committee of Medical Journal Editors (ICMJE) in relation to authorship and the use of AI.

These policies are generally accepting of the use of AI tools to assist writing, provided there is clarity around where and how such tools are used. However, the NEJM makes it clear that because authors are responsible for the accuracy, integrity, and originality of their work, the responsibilities of authorship cannot lie with AI, and AI tools may not be named as authors. Authors who do use AI in their submissions must carefully review and edit any AI-generated work to avoid submitting material that is factually incorrect, incomplete, or biased. Authors must also attribute any citations generated by AI to a direct source, and they are prohibited from citing AI-generated material as a primary source.

Interestingly, the NEJM group generally appears to embrace the use of AI for clinical and scientific research. The group is currently planning a monthly online journal titled NEJM AI, which aims to bridge fast-moving developments in AI, informatics, and technology in medicine with their application to clinical practice.

Blood

First published in 1946, Blood is the most cited peer-reviewed publication in the field of haematology and is published by the American Society of Hematology. A paragraph addressing the use of AI in manuscripts was recently added to the journal's editorial policies.

The policy states that AI tools, such as ChatGPT, do not meet the eligibility criteria for authorship. Blood further states that, to maintain the confidentiality of manuscript submissions, such tools may not be used to write peer reviews of journal articles.

Blood does, however, allow the use of AI for certain outputs in submissions, including data acquisition or analysis and the generation of graphics such as figures (so long as this is specified in the figure legend). Notably, Blood does not allow any AI-generated text content.

Nature

Nature is a multidisciplinary science journal first published in 1869 and is one of the most influential weekly publications worldwide.

Nature’s AI policies relate mostly to generative AI. They state that large language models (LLMs) such as ChatGPT do not fulfil the journal's authorship criteria, given that authorship carries accountability for the submitted work. However, Nature does allow the use of such models, provided their use is documented in the Methods section of each manuscript (or another appropriate section if no Methods section exists).

In contrast to Blood, Nature does not allow AI-generated images in any submission, citing legal and ethical concerns surrounding intellectual property and copyright. There are limited exceptions: such images may be supplied by those with contractual agreements with Nature, or included in submissions directly relating to AI (on a case-by-case basis). Nature also acknowledges that not all AI tools are generative; some may instead be used to optimise the quality of outputs such as figures and images. In these instances, use should be disclosed in the relevant caption upon submission to allow a case-by-case review.

The Lancet

The Lancet is one of the oldest and most influential general medical journals and is currently published by Elsevier. The Lancet's guidance on Publishing Excellence states that where authors use AI in the writing process, these technologies should only be used to improve the readability and language of the work, not to replace researcher tasks such as interpreting and analysing scientific data or generating scientific insights and conclusions.

The Lancet also states that the use of such technologies requires stringent oversight from the responsible authors, including editing any AI-generated output, given its potential to sound authoritative while being incorrect, incomplete, or biased. The Lancet does not permit any AI tool to be listed as an author or co-author, and any use of AI must be declared at the end of each submission.

Annals of Internal Medicine

Annals of Internal Medicine is an academic medical journal published by the American College of Physicians, focusing on the field of internal medicine and its relevant sub-specialties. Annals is one of the most influential specialty medical journals worldwide. Within the Information for Authors section of its website, the journal states that it follows the recommendations, policies, guidance, and processes related to research and publication ethics developed by the ICMJE, the Committee on Publication Ethics (COPE), and the Council of Science Editors.

At the point of submission, Annals requires authors to attest to any use of AI in the production of the submitted work. If AI was used, authors must describe how such technologies were used, both within the submission itself and in the accompanying cover letter. Annals states that AI cannot be listed as an author because it does not fulfil the ICMJE requirements for authorship, and that the authors remain ultimately responsible for all submitted material, including any content generated using AI technologies.

JAMA

The Journal of the American Medical Association (JAMA) is a medical journal that publishes original research, reviews, and editorials covering all aspects of biomedicine. First published in 1883, it is one of the most influential medical journals worldwide.

JAMA recently updated its Instructions for Authors to address AI technology. The journal discourages the submission and publication of content created using AI or similar technologies unless that use is part of the formal research design and methods. Even then, use is permitted only with a clear description of what content was generated by AI and how, including the name of the model or tool, its version and extension numbers, and the manufacturer. Authors are expected to take full responsibility for the integrity of any content generated and used.

With regard to authorship, JAMA states that AI tools and similar technologies do not qualify to be listed as authors. Authors who use such tools should report that use in the Acknowledgment section, or in the Methods section if it formed part of the formal research design or methods.

Interestingly, despite this discouragement of AI in submissions, JAMA appears receptive to LLMs and generative AI for clinical applications in general, with a dedicated channel for related research papers.

Closing Thoughts

With the prominence of generative AI only increasing, publishers have taken proactive steps to address the challenges of ethical and academic integrity that such disruptive technology brings.

While most journals understandably reject the notion that AI can be credited with authorship, there is a clear spectrum in the extent to which AI-generated material is accepted in submissions. Journals do, however, seem unanimous in not accepting entire manuscripts or articles for review that have been written using generative AI.

This perspective is unsurprising given the associated confidentiality risks. Generative AI platforms such as ChatGPT use data generated during user interactions for various purposes, including as part of the wider datasets used to train and improve their models. If a researcher inputs sensitive data into a generative AI platform, future iterations of that platform may inadvertently divulge unpublished information learned during training.

A blend of attitudes and perspectives currently exists among publishers, and these guidelines are likely to evolve just as quickly as the technology itself. Writers and authors should make themselves cognisant of each publisher's instructions on the use of AI before developing a manuscript. AI tools are here to stay, and authors and medical writers stand to benefit from integrating them into their workflows where possible.

However, it is also of utmost importance that ethical guidelines are in place to govern the use of AI so that academic integrity is not compromised. Transparent disclosure and acknowledgement of use should be included in submissions, and authors and medical writers must take steps to ensure the rigour of any AI-generated content they use.

There is a fine balance to strike between maintaining academic integrity and adopting innovation. When that balance is found, AI offers the potential for greater efficiency in generating academic output, which can only benefit the field of clinical research.

What are your thoughts on the increasing use of AI tools in academic and medical writing? Do you agree with any of the guidelines and recommendations from any of the above journals/publishers? Let us know in the comments below! You can find out more about us at elion.nz.

Photo credit

Adrian Lau