Abstract

Aim
Artificial Intelligence (AI) is a ground-breaking technology known for its capacity to process extensive datasets, and it is gaining recognition among clinicians for its potential to improve patient care. While AI has been successfully employed in analysing medical imaging, its use in developing patient information leaflets (PILs) remains limited. Patient access to clear information aligns with modern healthcare values and supports shared decision-making, a priority in the NHS. This study, conducted under single-blinded conditions, explores the acceptability of AI-generated patient documentation.

Method
Four PILs covering common ENT procedures and conditions were identified. ChatGPT 3.5 was used to modify the existing PILs, adjusting the text to a UK reading age of 14 (see the illustrative sketch below). The AI-generated PILs were formatted to match the corresponding trust documents, and participants were blinded to the source of each leaflet.

Results
Fifty-nine responses were collected from ENT outpatients over six weeks. AI-generated and Trust leaflets received comparable appearance scores (AI 6.82, Trust 6.85); the slight preference for Trust leaflets was not statistically significant (p=0.47). For overall presentation, most participants had no preference (N=27), with one-third favouring Trust leaflets and 13 preferring the AI-generated versions. For explanation of the condition, over half had no preference (N=36), while 16 favoured the human-generated and 7 the AI-generated leaflets. Readability and understanding were similar, with 25 participants finding both versions equally easy to read. Overall satisfaction was also comparable: 26 participants were equally satisfied, 18 preferred the human-generated leaflets, and 15 preferred the AI-generated ones.

Conclusions
Integrated into clinical practice, AI holds the potential to transform both the quality and quantity of PILs. As AI becomes more accessible, healthcare services must guide the public in order to safeguard the standard of the information provided.
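For illustration only, the sketch below shows how an existing leaflet could be rewritten to a target reading age programmatically through the OpenAI chat completions API. This is a minimal sketch under stated assumptions: the study reports using ChatGPT 3.5, but the model name ("gpt-3.5-turbo"), the prompt wording, and the simplify_leaflet helper are illustrative choices, not the authors' published protocol.

# Illustrative sketch only; prompt wording, model choice and helper name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify_leaflet(original_text: str, reading_age: int = 14) -> str:
    """Ask the model to rewrite a patient information leaflet at a target UK reading age."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You rewrite NHS patient information leaflets in plain English."},
            {"role": "user",
             "content": (
                 f"Rewrite the following leaflet for a UK reading age of {reading_age}, "
                 "keeping all clinical content accurate and unchanged:\n\n" + original_text
             )},
        ],
    )
    return response.choices[0].message.content

# Example usage with a hypothetical leaflet file:
# simplified = simplify_leaflet(open("tonsillectomy_leaflet.txt").read())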

