Abstract

In this study, we conduct an empirical analysis of interpretation errors made by Amazon Alexa, the speech-recognition engine that powers the Amazon Echo family of devices. We show how common misinterpretations made by Alexa can be used to build a new class of attacks, called skill squatting attacks, and discuss their security implications.
