Abstract

Voice assistants (VAs) such as Siri and Google Now have become an increasingly popular human-machine interaction method and have made various systems voice controllable. Prior work on attacking voice assistants shows that hidden voice commands that are incomprehensible to people can control VAs. Hidden voice commands, though 'hidden', are nonetheless audible. In this work, we design a completely inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers to achieve inaudibility. By leveraging the nonlinearity of the microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and, more importantly, interpreted by the voice assistants. We validate DolphinAttack on popular voice assistants, including Siri, Google Now, S Voice, HiVoice, Cortana, and Alexa. By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, which include activating Siri to initiate a FaceTime call on an iPhone, activating Google Now to turn on airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions. We validate that it is feasible to detect DolphinAttack by classifying the audio using a support vector machine (SVM), and suggest re-designing voice assistants to be resilient to inaudible voice command attacks.
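
For illustration, the core mechanism described above can be sketched as amplitude modulation of a baseband voice signal onto an ultrasonic carrier, followed by demodulation through a nonlinear microphone response. The sketch below simulates this with a single 400 Hz tone standing in for a voice command and a simple quadratic nonlinearity; the carrier frequency, nonlinearity coefficients, and filter parameters are illustrative assumptions, not measurements of any specific device.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 192_000          # simulation sample rate, high enough to represent the carrier
fc = 25_000           # hypothetical ultrasonic carrier frequency (Hz), above human hearing
t = np.arange(fs) / fs  # one second of samples

# Stand-in for a recorded voice command: a low-frequency baseband tone.
baseband = 0.5 * np.sin(2 * np.pi * 400 * t)

# Amplitude-modulate the baseband onto the ultrasonic carrier (the signal the attacker plays).
carrier = np.cos(2 * np.pi * fc * t)
transmitted = (1.0 + baseband) * carrier

# Model the microphone circuit's nonlinearity with a quadratic term: out = a1*x + a2*x^2.
# The x^2 term produces, among other products, a low-frequency copy of the baseband.
a1, a2 = 1.0, 0.2
mic_output = a1 * transmitted + a2 * transmitted ** 2

# A low-pass filter (standing in for the low-pass stage before the ADC) removes the
# ultrasonic components, leaving the demodulated command in the audible band.
b, a = butter(4, 8_000, btype="low", fs=fs)
recovered = filtfilt(b, a, mic_output)

# Correlate the recovered signal with the original baseband to confirm demodulation.
corr = np.corrcoef(recovered, baseband)[0, 1]
print(f"correlation between recovered signal and original baseband: {corr:.3f}")
```

Because the linear term of the microphone response lies entirely near the carrier frequency, only the squared term survives the low-pass filter, which is why the recovered signal tracks the original voice command.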
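The SVM-based detection mentioned above can likewise be sketched in a few lines. The feature bands, kernel, and the placeholder inputs `genuine_clips` and `attack_clips` below are assumptions for illustration only and do not reproduce the paper's feature set or data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def band_energy_features(clip, fs=48_000):
    """Toy feature vector: log energy in a few frequency bands of one audio clip.
    The band edges are illustrative placeholders, not the paper's features."""
    bands = [(0, 500), (500, 1_000), (1_000, 4_000), (4_000, 8_000), (8_000, 20_000)]
    spectrum = np.abs(np.fft.rfft(clip)) ** 2
    freqs = np.fft.rfftfreq(len(clip), d=1 / fs)
    return np.log1p([spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

def train_detector(genuine_clips, attack_clips):
    """genuine_clips / attack_clips: lists of recorded normal commands and of commands
    recovered from ultrasonic injection (placeholder names; data not provided here)."""
    X = np.array([band_energy_features(c) for c in list(genuine_clips) + list(attack_clips)])
    y = np.array([0] * len(genuine_clips) + [1] * len(attack_clips))
    clf = SVC(kernel="rbf")
    scores = cross_val_score(clf, X, y, cv=5)  # rough cross-validated detection accuracy
    clf.fit(X, y)
    return clf, scores.mean()
```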
