Abstract

Purpose

Digital voice assistants use wake word engines (WWEs) to monitor surrounding audio for the voice assistant's name. A WWE has two failure conditions: false negatives and false positives. Wake word false positives threaten personal privacy because, upon activation, the digital assistant records audio and transmits it to the cloud voice service for processing.

Design/methodology/approach

This observational study attempted to identify which Amazon Alexa wake word and Amazon Echo smart speaker resulted in the fewest human voice false positives. Over an eight-week period, false-positive data were collected from four different Amazon Echo smart speakers located in a small apartment shared by three female roommates.

Findings

Results from this study suggest that the number of human voice false positives is related to both wake word selection and Amazon Echo hardware. The wake word Alexa resulted in the fewest false positives.

Originality/value

This study suggests Amazon Alexa users can better protect their privacy by selecting Alexa as their wake word and by choosing smart speakers with the greatest number of microphones in a far-field array with 360-degree geometry.

