Industry Insights: AI 4 March

This artificial intelligence (AI) Industry Insight focuses on the latest updates on the use of AI in medical communications. We highlight Elsevier’s editorial policy on the use of generative AI in publications. We also look at two separate studies: one demonstrating that AI outperforms doctors in summarising health records, and another demonstrating that AI outperforms humans in tests of creative potential.

Elsevier’s editorial policy on the use of AI in medical research publishing

This new policy has two sections: one on the use of generative AI and AI-assisted technologies in scientific writing, and another on their use in figures, images and artwork. Generative AI should only be used to improve the readability and language of the work, and authors are asked to disclose that AI was used in writing the publication. This should be done with human oversight, with authors carefully reviewing the results, as they remain ultimately responsible for the work. The policy clearly states that AI and AI-assisted technologies should not be listed as authors or co-authors, and that generative AI or AI-assisted tools must not be used to create or alter images.

AI outperforms humans in standardised tests of creative potential

In a study by researchers at the University of Arkansas, 151 human participants were compared against ChatGPT-4 in three tests designed to measure divergent thinking. The authors found that “Overall, GPT-4 was more original and elaborate than humans on each of the divergent thinking tasks, even when controlling for fluency of responses.” The authors noted several caveats to this conclusion, in particular that the creative potential of AI is entirely dependent on the assistance of a human user.

AI outperforms doctors in summarising health records

This study, published in Nature Medicine, evaluated eight large language models (LLMs) across four clinical summarisation tasks: patient questions, radiology reports, doctor-patient dialogues and progress notes. Ten physicians took part in the clinical reader study and evaluated summary completeness, correctness and conciseness. In most cases, summaries from the best-adapted LLM were deemed either equivalent (45%) or superior (35%) to summaries from medical experts. The authors concluded that integrating LLMs into clinical workflows could reduce documentation burden, allowing clinicians to focus on patient care.

Elion Medical Communications