Abstract

Artificial intelligence and large language models (LLMs) have emerged as potentially disruptive technologies in healthcare. In this study, GPT-3.5, a widely accessible LLM, was assessed for its accuracy and reliability in performing guideline-based evaluation of neuraxial bleeding risk in hypothetical patients on anticoagulation medication, and the impact of structured prompt guidance on its performance was explored. A dataset of 10 hypothetical patient stems and 26 anticoagulation profiles (260 unique combinations) was developed from the American Society of Regional Anesthesia and Pain Medicine guidelines. Five prompts, ranging from minimal guidance to explicit instructions, were created for the LLM, and the model's responses were compared with a "truth table" derived from the guidelines. Performance was measured with accuracy and the area under the receiver operating characteristic curve (AUC). Baseline performance of GPT-3.5 was only slightly better than chance; with detailed prompts and explicit guideline text, performance improved significantly (AUC 0.70, 95% CI 0.64 to 0.77), although performance varied among medication classes. These results suggest that LLMs can assist in guideline-based clinical decision making but depend heavily on accurate and relevant prompts, so integration into clinical workflows should be approached cautiously, with attention to safety and privacy concerns. Further research is needed to optimize LLM performance, address more complex scenarios, and better characterize the capabilities and limitations of LLMs in healthcare.
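
The evaluation design described above can be summarized as a simple loop: each of the 260 stem-by-profile combinations is posed to the model under a given prompt template, the model's safe/unsafe judgement is scored against the guideline-derived truth table, and accuracy and AUC are computed. The sketch below illustrates this structure only; the prompt wording, the `query_llm` placeholder, and all data names are assumptions for illustration, not the authors' actual code or prompts.

```python
# Illustrative sketch of the scoring loop implied by the abstract.
# query_llm is a placeholder for a GPT-3.5 API call; the templates,
# data structures, and 0.5 decision threshold are assumptions.
from itertools import product

from sklearn.metrics import accuracy_score, roc_auc_score

PROMPT_TEMPLATES = {
    "minimal": "Patient: {stem}. Medications: {med}. Is neuraxial anesthesia safe?",
    "explicit": (
        "Using the ASRA anticoagulation guidelines, state whether neuraxial "
        "anesthesia is safe. Patient: {stem}. Medication profile: {med}."
    ),
}


def query_llm(prompt: str) -> float:
    """Placeholder: send the prompt to GPT-3.5 and return P(unsafe) in [0, 1]."""
    raise NotImplementedError("swap in a real chat-completions call here")


def evaluate(stems, profiles, truth_table, template):
    """Score one prompt template over all stem x profile combinations."""
    y_true, y_score = [], []
    for stem, med in product(stems, profiles):
        prob_unsafe = query_llm(template.format(stem=stem, med=med))
        y_true.append(truth_table[(stem, med)])  # 1 = contraindicated per guidelines
        y_score.append(prob_unsafe)
    preds = [int(p >= 0.5) for p in y_score]
    return {
        "accuracy": accuracy_score(y_true, preds),
        "auc": roc_auc_score(y_true, y_score),
    }
```

Running `evaluate` once per prompt template would yield one accuracy/AUC pair per level of prompt guidance, mirroring the comparison of minimal versus explicit prompting reported in the study.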
