Abstract

In smart homes, voice control has become a primary interface between users and smart devices. To make voice control more secure, speaker verification systems have been studied to use the human voice as a biometric that accurately identifies a legitimate user and prevents unauthorized access. Recent studies, however, have shown that speaker verification systems are particularly vulnerable to adversarial attacks. In this work, we attempt to design and implement a defense that is simple, lightweight, and effective against adversarial attacks on speaker verification. Specifically, we study two opposite operations for preprocessing input audio in speaker verification systems: denoising, which attempts to remove or reduce adversarial perturbations, and noise-adding, which adds small Gaussian noise to the input audio. Experiments show that both methods can significantly degrade the performance of a state-of-the-art adversarial attack: denoising and noise-adding reduce the targeted attack success rate from 100% to only 56% and 5.2%, respectively. Moreover, noise-adding slows the attack down by a factor of 25 and has only a minor effect on the normal operation of a speaker verification system.
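
To make the noise-adding operation concrete, below is a minimal sketch of such a preprocessing step, assuming the input is a normalized 1-D NumPy waveform. The function name, the noise standard deviation, and the hypothetical `verify` model call are illustrative assumptions, not the paper's exact implementation or parameters.

```python
import numpy as np

def add_gaussian_noise(waveform: np.ndarray, noise_std: float = 0.002) -> np.ndarray:
    """Add small zero-mean Gaussian noise to an audio waveform before
    it is passed to the speaker verification model.

    waveform  : 1-D float array of audio samples, assumed in [-1, 1]
    noise_std : standard deviation of the added noise (illustrative value);
                kept small so normal verification is only mildly affected
    """
    noise = np.random.normal(loc=0.0, scale=noise_std, size=waveform.shape)
    # Clip to keep samples in the valid amplitude range after adding noise.
    return np.clip(waveform + noise, -1.0, 1.0)

# Usage (hypothetical verification model `verify`):
#   noisy = add_gaussian_noise(utterance)
#   score = verify(noisy)
```

The intuition is that an adversarial perturbation is crafted for the exact input waveform, so even small random noise injected before verification can disrupt the carefully optimized perturbation while leaving genuine speech largely intact.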
