Abstract

The current study was a replication of, and comparison with, our previous research, which examined the comprehension accuracy of popular intelligent virtual assistants (Amazon Alexa, Google Assistant, and Apple Siri) in recognizing the generic and brand names of the 50 most commonly dispensed medications in the United States. Using the same voice recordings from 2019, audio clips from 46 participants were played back to each device in 2021. Google Assistant achieved the highest comprehension accuracy for both brand medication names (86.0%) and generic medication names (84.3%), followed by Apple Siri (brand names = 78.4%, generic names = 75.0%), with the lowest accuracy from Amazon Alexa (brand names = 64.2%, generic names = 66.7%). These findings follow the same pattern as our previous research but reveal significant increases of ~10–24% in performance for Amazon Alexa and Apple Siri over the past two years. This indicates that the underlying artificial intelligence software has improved at recognizing the speech characteristics of complex medication names, which has important implications for telemedicine and digital healthcare services.
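
The comprehension-accuracy percentages above are simple proportions of correctly recognized medication names, aggregated per assistant and name type. A minimal sketch of that aggregation is given below; the trial-level layout and column names (`year`, `assistant`, `name_type`, `correct`) are assumptions for illustration, not the study's actual data schema.

```python
# A minimal sketch, assuming one binary record per playback trial
# (participant x assistant x medication name), with columns:
#   year (2019 or 2021), assistant, name_type ('brand' or 'generic'),
#   correct (1 if the assistant comprehended the name, else 0).
import pandas as pd


def comprehension_accuracy(trials: pd.DataFrame) -> pd.DataFrame:
    """Percentage of medication names correctly comprehended, split by
    recording year, assistant, and brand vs. generic name."""
    return (
        trials.groupby(["year", "assistant", "name_type"])["correct"]
        .mean()              # proportion correct
        .mul(100)            # convert to percentage
        .round(1)
        .unstack("name_type")
    )


# The year-over-year change for each assistant is then the difference
# between its 2021 and 2019 rows in this table.
```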

Highlights

  • Intelligent virtual assistants (IVA), such as Amazon Alexa, Google Assistant, and Apple Siri, are popular artificial intelligence (AI) software programs designed to simulate human conversation and perform web-based searches and other commands [1]

  • A main effect of IVA was found [F(2, 88) = 336.48, p < 0.0001, ηp² = 0.88], revealing that Google Assistant achieved the highest accuracy (M = 85.6%, SD = 9.0), which was significantly greater than Siri (M = 76.7%, SD = 11.0), which was, in turn, significantly greater than Alexa (M = 65.4%, SD = 11.2) (see the analysis sketch below)
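
The reported main effect corresponds to a one-way repeated-measures ANOVA with assistant as the within-subject factor and per-participant comprehension accuracy as the dependent variable. The sketch below shows how such a test could be run and how partial eta squared can be recovered from the F statistic; the DataFrame layout, column names (`participant`, `assistant`, `accuracy`), and use of statsmodels are illustrative assumptions, not the authors' actual analysis code.

```python
# A minimal sketch (not the authors' code) of the repeated-measures ANOVA
# reported in the highlight above, assuming a long-format table `scores`
# with one row per participant x assistant:
#   participant | assistant | accuracy  (% of medication names comprehended)
import pandas as pd
from statsmodels.stats.anova import AnovaRM


def rm_anova_with_effect_size(scores: pd.DataFrame) -> dict:
    """One-way repeated-measures ANOVA with `assistant` as the
    within-subject factor, plus partial eta squared derived from F."""
    fit = AnovaRM(
        data=scores,
        depvar="accuracy",
        subject="participant",
        within=["assistant"],
    ).fit()

    row = fit.anova_table.loc["assistant"]
    f_value, df_effect, df_error = row["F Value"], row["Num DF"], row["Den DF"]

    # Partial eta squared can be recovered from F and its degrees of freedom:
    #   eta_p^2 = (F * df1) / (F * df1 + df2)
    # e.g. F = 336.48 with df = (2, 88) gives eta_p^2 ~= 0.88, matching the
    # value reported above.
    eta_p2 = (f_value * df_effect) / (f_value * df_effect + df_error)

    return {
        "F": f_value,
        "df": (df_effect, df_error),
        "p": row["Pr > F"],
        "partial_eta_squared": eta_p2,
    }
```

Pairwise follow-up comparisons between assistants (Google Assistant vs. Siri, Siri vs. Alexa) would typically use paired t-tests (e.g., scipy.stats.ttest_rel) with a multiple-comparison correction; the specific post-hoc procedure is not stated in this excerpt.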

Introduction

Intelligent virtual (or voice) assistants (IVA), such as Amazon Alexa (hereinafter referred to as Alexa), Google Assistant, and Apple Siri (hereinafter referred to as Siri), are popular artificial intelligence (AI) software programs designed to simulate human conversation and perform web-based searches and other commands [1]. Previous research has investigated the use of these devices to gather health information and give medically related suggestions for mental and physical health inquiries [2,3,4,5,6]. These findings have revealed that IVAs generally provide poor, inconsistent, and potentially harmful advice to users. A major limitation of the IVAs’ ability to provide appropriate health information is their inaccurate comprehension of complex medical language syntax [7].
