Industry Insights: MedComms 7 December

  • Reading time: 4 min read

In this piece, we highlight three topics sparking discussion in the medical publishing industry. We consider the ethical use of AI in medical publishing, followed by recommendations for using AI in scholarly communication. We also explore the involvement of patients as stakeholders in medical research.

As the world embraces the advancing role of AI in healthcare, a cautious approach is strongly advised in the medical publishing industry. Authors Zhang and Zhang discuss the ethics of using AI in medical publishing. AI chatbots may be proficient at disseminating information and aiding user interaction; however, the critical focus on patient privacy, data security, and informed consent cannot be overstated. Transparent disclosure of AI involvement in content creation is essential to maintain the integrity of medical publications. Several studies have already listed ChatGPT as an author, yet whether generative AI meets the authorship criteria set by the International Committee of Medical Journal Editors remains a subject of debate.

There is an urgent need for academic bodies to lead thorough discussions on the implications of AI-generated content in scholarly publishing, with the aim of developing comprehensive guidance. There may come a point where we can no longer take for granted that what we are reading was written by a human author. Or are we already there?

The World Association of Medical Editors (WAME) recently updated its recommendations for using generative AI in scholarly communication. In response, organisations such as the ICMJE have introduced instructions on AI-assisted technology, while Cambridge University Press introduced an inaugural AI research ethics policy. The main recommendations include not crediting large language models (LLMs) as authors, owing to their inability to generate original ideas or take responsibility for the integrity of the work. To ensure accuracy and authenticity, AI-generated text should be cross-verified against trusted sources, with authors actively reviewing chatbot outputs for potential bias.

Transparent reporting is emphasised, requiring authors to specify how a chatbot was involved. The CANGARU guidelines, anticipated in March 2024, aim to standardise reporting methods for studies using LLMs. Adhering to these best practices enables the medical communications industry to leverage generative AI responsibly, ensuring ethical standards, transparency, and accurate information dissemination.

There is a growing trend to involve patients as vital stakeholders in research, even at scientific conferences. A 'how to' guide outlines a multi-stakeholder co-creation process, offering tips for developing public involvement activities for both researchers and patients. Beyond the traditional scientific discourse, involving patients at scientific gatherings fosters a more comprehensive understanding of healthcare challenges and solutions. Patient-centred care takes precedence, as patients' first-hand experiences provide valuable insights, shaping discussions on treatment approaches, outcomes, and overall healthcare focus. Integrating patient perspectives can enhance the relevance and impact of scientific findings, aligning research with real-world needs. Inclusive conferences create a collaborative environment where healthcare professionals, researchers, and patients can exchange ideas, ultimately driving innovation and improving the quality of patient care. Recognising patients as crucial stakeholders not only empowers them but also amplifies the collective effort to address medical complexities, leading to more effective and compassionate healthcare practices.

Elion Medical Communications