Abstract

Voice assistants (VAs) such as Siri and Google Now have become an increasingly popular human-machine interaction method and have made various systems voice controllable. Prior work on attacking voice assistants shows that hidden voice commands that are incomprehensible to people can control VAs. Hidden voice commands, though ‘hidden’, are nonetheless audible. In this work, we design a completely inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers to achieve inaudibility. By leveraging the nonlinearity of the microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and, more importantly, interpreted by the voice assistants. We validate DolphinAttack on popular voice assistants, including Siri, Google Now, S Voice, HiVoice, Cortana, and Alexa. By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, including activating Siri to initiate a FaceTime call on an iPhone, activating Google Now to switch the phone into airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions. We validate that it is feasible to detect DolphinAttack by classifying the audio using a support vector machine (SVM), and suggest re-designing voice assistants to be resilient to inaudible voice command attacks.
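
To illustrate the attack principle summarized above, below is a minimal Python sketch (not the paper's implementation) that amplitude-modulates a stand-in baseband "command" onto an ultrasonic carrier and models the microphone front end's nonlinearity as a quadratic term. The carrier frequency, sampling rate, filter cutoff, and nonlinearity coefficients are illustrative assumptions chosen only to make the demodulation effect visible.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# --- Illustrative parameters (assumptions, not the paper's hardware setup) ---
fs = 192_000          # sampling rate, high enough to represent the ultrasonic carrier (Hz)
f_carrier = 40_000    # ultrasonic carrier frequency (Hz), above human hearing
f_voice = 1_000       # stand-in for a voice-band command component (Hz)
duration = 0.05       # seconds

t = np.arange(int(fs * duration)) / fs

# Baseband "voice command" (a single tone as a placeholder for real speech)
baseband = np.sin(2 * np.pi * f_voice * t)

# Amplitude-modulate the command onto the ultrasonic carrier (inaudible to humans)
carrier = np.cos(2 * np.pi * f_carrier * t)
transmitted = (1.0 + 0.8 * baseband) * carrier

# Model the microphone front-end nonlinearity: the output contains a quadratic term.
# Squaring the AM waveform creates a component back at the original baseband frequency.
a1, a2 = 1.0, 0.5     # linear and second-order gain terms (illustrative values)
mic_output = a1 * transmitted + a2 * transmitted ** 2

# A low-pass filter (standing in for the recording chain's anti-aliasing filter)
# removes the carrier, leaving a demodulated copy of the command in the audible band.
b, a = butter(4, 8_000 / (fs / 2), btype="low")
recovered = filtfilt(b, a, mic_output)
recovered -= recovered.mean()   # drop the DC offset introduced by squaring

# Correlate the recovered signal with the original command to confirm recovery
corr = np.corrcoef(baseband, recovered)[0, 1]
print(f"correlation between original and demodulated command: {corr:.3f}")
```

In a real attack the baseband would be a recorded voice command played through an ultrasonic transducer; the point of the sketch is that the quadratic term of the microphone's response yields a baseband copy of the command that survives low-pass filtering, which is why the voice assistant's speech recognizer can interpret a signal humans cannot hear.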
