Abstract

Background: The potential uses of artificial intelligence have extended into the fields of health care delivery and education. However, challenges are associated with introducing innovative technologies into health care, particularly with respect to information quality.

Objective: This study aimed to evaluate the accuracy of answers provided by a chatbot in response to questions that patients should ask before taking a new medication.

Methods: Twelve questions obtained from the Agency for Healthcare Research and Quality were posed to a chatbot for each of the top 20 drugs. Two reviewers independently evaluated and rated each response on a 6-point scale for accuracy and a 3-point scale for completeness, with a score of 2 considered adequate. Accuracy was determined using clinical expertise and a drug information database. After the independent reviews, answers were compared, and discrepancies were assigned a consensus score.

Results: Of the 240 responses, 222 (92.5%) were assessed as completely accurate. Of the 18 responses that were not completely accurate, 10 (4.2%) were mostly accurate, 5 (2.1%) were more accurate than inaccurate, 2 (0.8%) were equal parts accurate and inaccurate, and 1 (0.4%) was more inaccurate than accurate. Of the 240 responses, 194 (80.8%) were comprehensively complete, 235 (97.9%) scored 2 or higher, and 5 (2.1%) were considered incomplete.

Conclusion: Using a chatbot to answer questions commonly asked by patients yields mostly accurate responses, but these may include inaccurate information or lack information valuable to patients.
