Abstract
Although Stefan Harrer's commendable contribution to the Lancet opens an important debate about the role of Large Language Models ("LLMs") in healthcare and medicine,1 we believe that Harrer's argument would benefit from more nuanced distinctions between the ethical properties of different LLM use cases, which are critical to matching risk management to the risk profiles and potential benefits of LLMs.

It is important to note that LLMs' transformative potential is not merely technological but moral: the principle of beneficence compels us to pursue the utility gains that LLM use can bring. Although he notes this potential, Harrer's framework would be improved by the inclusion of beneficence and utility, so that they can be properly weighed against competing values. Where AI ethics principles such as accountability or transparency direct us against developing or deploying LLMs that might save lives or minimize suffering, we must weigh these competing imperatives carefully.

To capture the utility of LLMs, we must take a risk-based and proportionate approach to applying the other principles in Harrer's framework. Consider the principle of safety: clinical LLM applications that interact directly with patients would need to satisfy a very high safety threshold, whereas any risks of error by AI applications in the drug discovery process are already adequately mitigated by the existing drug development safety ecosystem. These different levels of novel risk should occasion proportionate responses.

The examples above illustrate that Harrer's framework must be supplemented by a more granular analysis of how these principles apply and interact in individual use cases. Only a nuanced, use case-based approach will capture the complexity of the various LLM risk–benefit trade-offs and allow us to leverage their potential safely and proportionately.

Contributors
All authors contributed to the ideation and writing of the paper.

Declaration of interests
All authors are employees of GSK. The authors received no funding for this letter.

Reference
1. Harrer S. Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. eBioMedicine. 2023;90:104512. https://doi.org/10.1016/j.ebiom.2023.104512