Abstract

Veterinary clinical narratives remain a largely untapped resource for addressing complex diseases. Here we compare the ability of a large language model (ChatGPT) and a previously developed regular expression (RegexT) to identify overweight body condition scores (BCS) in veterinary narratives pertaining to companion animals. BCS values were extracted from 4415 anonymised clinical narratives either with RegexT or by appending each narrative to a prompt sent to ChatGPT, instructing the model to return the BCS information. Data were manually reviewed for comparison. The precision of RegexT was higher (100%, 95% confidence interval [CI] 94.81%-100%) than that of ChatGPT (89.3%, 95% CI 82.75%-93.64%). However, the recall of ChatGPT (100%, 95% CI 96.18%-100%) was considerably higher than that of RegexT (72.6%, 95% CI 63.92%-79.94%). Prior anonymisation and subtle prompt engineering are needed to improve ChatGPT output. Large language models create diverse opportunities and, while complex, present an intuitive interface to information. However, they require careful implementation to avoid unpredictable errors.
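The comparison above rests on precision (the fraction of extracted BCS values that were correct) and recall (the fraction of true overweight BCS mentions that were found), each reported with a 95% confidence interval. A minimal sketch of how such metrics can be computed is below; the counts are purely illustrative placeholders, not the study's data, and the Wilson score interval is used here as one common choice of binomial CI (the paper does not state which interval method it used).

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return (centre - half, centre + half)

def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts for illustration only (not from the study):
tp, fp, fn = 90, 10, 0
prec, rec = precision_recall(tp, fp, fn)
lo, hi = wilson_ci(tp, tp + fp)
print(f"precision={prec:.3f} (95% CI {lo:.3f}-{hi:.3f}), recall={rec:.3f}")
```

With these placeholder counts, precision is 0.9 and recall is 1.0, mirroring the trade-off the abstract describes: an extractor that never returns a wrong value but misses some mentions has high precision and lower recall, and vice versa.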
