Abstract

Background: Conversational assistants, such as Siri, Alexa, and Google Assistant, are ubiquitous and are beginning to be used as portals for medical services. However, the potential safety issues of using conversational assistants for medical information by patients and consumers are not understood.

Objective: To determine the prevalence and nature of the harm that could result from patients or consumers using conversational assistants for medical information.

Methods: Participants were given medical problems to pose to Siri, Alexa, or Google Assistant and asked to determine an action to take based on information from the system. Assignment of tasks and systems was randomized across participants, and participants queried the conversational assistants in their own words, making as many attempts as needed until they either reported an action to take or gave up. Participant-reported actions for each medical task were rated for patient harm using an Agency for Healthcare Research and Quality harm scale.

Results: Fifty-four subjects completed the study, with a mean age of 42 years (SD 18). Twenty-nine (54%) were female, 31 (57%) were Caucasian, and 26 (50%) were college educated. Only 8 (15%) reported using a conversational assistant regularly, while 22 (41%) had never used one and 24 (44%) had tried one "a few times." Forty-four (82%) used computers regularly. Subjects were able to complete only 168 (43%) of their 394 tasks. Of these, 49 (29%) reported actions that could have resulted in some degree of patient harm, including 27 (16%) that could have resulted in death.

Conclusions: Reliance on conversational assistants for actionable medical information represents a safety risk for patients and consumers. Patients should be cautioned not to use these technologies for answers to medical questions they intend to act on without further consultation from a health care provider.
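As a quick arithmetic check on the proportions reported in Results, a minimal Python sketch using only the counts stated above (the variable names are ours, purely illustrative):

    # Counts taken directly from the abstract's Results section.
    total_tasks = 394   # tasks attempted across all participants
    completed = 168     # tasks for which an action was reported
    harmful = 49        # completed tasks whose reported action was rated harmful
    fatal = 27          # completed tasks whose action could have caused death

    print(f"completion rate: {completed / total_tasks:.0%}")       # -> 43%
    print(f"harm rate (of completed): {harmful / completed:.0%}")  # -> 29%
    print(f"fatal rate (of completed): {fatal / completed:.0%}")   # -> 16%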

Highlights

  • Background: Conversational assistants, such as Siri (Apple), Alexa (Amazon), and Google Assistant, are ubiquitous

  • We reviewed work on the development of medical error taxonomies [28-30] but found that they did not capture the nuances of the errors we observed, such as errors involving sequential interactions between subjects and conversational assistants, or errors in which both the subject and the conversational assistant were partially to blame, so we developed a taxonomy based on the cases we observed (Table 3); a hypothetical encoding of such categories is sketched after this list

  • Participants made a median of 5 attempts per task with an interquartile range (IQR) of 3.0-7.0, each lasting a median of 11.0 seconds (IQR 8.0-17.0)
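For readers unfamiliar with these summary statistics, a minimal Python sketch of how a median and interquartile range are computed; the attempt counts below are invented placeholders, not the study data:

    import statistics

    # Hypothetical per-task attempt counts (illustrative only).
    attempts = [2, 3, 3, 5, 5, 6, 7, 7, 9]

    median_attempts = statistics.median(attempts)    # middle value
    q1, _, q3 = statistics.quantiles(attempts, n=4)  # quartile cut points
    print(f"median={median_attempts}, IQR={q1}-{q3}")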

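The error taxonomy itself appears in Table 3 of the paper and is not reproduced here; purely as a hypothetical illustration of how mixed-responsibility error categories like those described above might be encoded, a Python sketch:

    from enum import Enum, auto

    # Hypothetical categories; the actual taxonomy is in Table 3 of the paper.
    class ErrorSource(Enum):
        USER_ONLY = auto()        # subject misphrased or misunderstood the task
        SYSTEM_ONLY = auto()      # assistant returned incomplete/incorrect info
        USER_AND_SYSTEM = auto()  # subject and assistant both partially to blame
        SEQUENTIAL = auto()       # error emerged over a multi-turn interaction

    # Example: tagging an observed case (illustrative).
    case_rating = ErrorSource.USER_AND_SYSTEM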


Introduction

Background

Conversational assistants, such as Siri (Apple), Alexa (Amazon), and Google Assistant, are ubiquitous. Many users believe that voice search using conversational assistants is more accurate than using web search [1]. These interfaces are increasingly being used as health information portals for consumers, with Amazon currently listing 78 "medical skill" add-ons for the Alexa assistant alone [2]. The use of conversational assistants for medical information, such as medication recommendations or emergency procedures, may represent a safety risk if these systems return incomplete or incorrect information and users act on it without further consultation from health care professionals.


