Abstract
Radical advances in telecommunications and computer science have enabled a myriad of applications and novel, seamless interactions with computing interfaces. Voice Assistants (VAs) have become the norm for smartphones, and millions of VAs embedded in smart devices are used to control those devices in the smart-home context. Previous research has shown that VAs are prone to attacks, leading vendors to implement countermeasures. One such measure is to allow only a specific individual, the device's owner, to perform potentially dangerous tasks, e.g. tasks that disclose personal information or involve monetary transactions. To understand the extent to which VAs provide the necessary protection to their users, we experimented with two of the most widely used VAs, which the participants trained on their own voices. We then applied voice synthesis to samples provided by the participants to synthesise commands that trigger the corresponding VA and perform a dangerous task. Our extensive results show that more than 30% of our audio-synthesis attacks were successful, and that more than half of the participants suffered at least one successful attack. Moreover, the results reveal statistically significant variation among vendors and, in one case, even gender bias. These outcomes are alarming and call for the deployment of further countermeasures to prevent exploitation, as the number of VAs in use is currently comparable to the world population.
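The abstract does not name the synthesis tooling used in the study; purely as an illustrative sketch of the described pipeline (clone a participant's voice from a short sample, render a spoken command, replay it near the VA), an off-the-shelf voice-cloning model could be driven as below. The model identifier, file paths, and command text are assumptions for illustration, not the authors' actual setup.

```python
# Illustrative sketch only: the paper does not specify its synthesis stack.
# Assumes Coqui TTS (pip install TTS) with the XTTS v2 voice-cloning model,
# plus soundfile/sounddevice for playback near the target device.
import sounddevice as sd
import soundfile as sf
from TTS.api import TTS

# Load a multilingual voice-cloning model (a hypothetical choice here).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the participant's voice from a short enrolment sample and render
# a potentially dangerous command of the kind the study describes.
tts.tts_to_file(
    text="Hey assistant, read my latest messages.",  # hypothetical command
    speaker_wav="participant_sample.wav",            # sample from the participant
    language="en",
    file_path="synthesised_command.wav",
)

# Replay the synthesised command through a speaker placed near the VA;
# in the study's terms, the attack succeeds if the VA accepts the audio
# as the owner's voice and performs the sensitive task.
audio, sample_rate = sf.read("synthesised_command.wav")
sd.play(audio, sample_rate)
sd.wait()
```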