Abstract

This article evaluates the reliability of automatic speaker verification (ASV) systems against new speech synthesis methods based on deep neural networks. ASV systems are widely deployed as a secure and effective means of biometric authentication. At the same time, their rapid adoption attracts attackers equipped with ever more sophisticated spoofing methods. Until recently, synthesizing the speech of a target speaker did not seriously compromise state-of-the-art ASV systems. This is changing as deep neural networks enter the synthesis process: projects such as WaveNet, Deep Voice, Voice Loop, and many others generate natural, high-quality speech capable of cloning a voice identity, and we are approaching an era in which a genuine voice can no longer be distinguished from a synthesized one. It is therefore necessary to quantify the robustness of current ASV systems against these new voice-cloning methods. In this article, the well-known SVM- and GMM-based ASV systems, as well as a newer CNN-based system, are evaluated on speech synthesized by the Tacotron 2 TTS system with a WaveNet vocoder. The results of this work confirm our concerns about the reliability of ASV systems when confronted with synthesized speech.
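The abstract mentions GMM-based ASV among the evaluated systems. As a rough illustration only (the article's own experimental setup, features, and models are not reproduced here), the sketch below shows the classic GMM-UBM verification idea: score a trial utterance by the log-likelihood ratio between a target-speaker model and a universal background model. All feature matrices and parameters are hypothetical placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical acoustic feature matrices of shape (n_frames, n_coeffs),
# e.g. MFCCs from a front-end; random data stands in for real speech here.
rng = np.random.default_rng(0)
ubm_features = rng.normal(size=(5000, 20))     # pooled background speakers
enroll_features = rng.normal(size=(1000, 20))  # target-speaker enrollment
trial_features = rng.normal(size=(300, 20))    # test utterance (genuine or synthesized)

# Universal background model trained on pooled background speech.
ubm = GaussianMixture(n_components=64, covariance_type="diag", max_iter=100)
ubm.fit(ubm_features)

# Target-speaker model; a production system would MAP-adapt the UBM
# rather than train from scratch, which this sketch omits for brevity.
spk = GaussianMixture(n_components=64, covariance_type="diag", max_iter=100)
spk.fit(enroll_features)

# Verification score: average per-frame log-likelihood ratio over the trial.
llr = spk.score(trial_features) - ubm.score(trial_features)
accept = llr > 0.0  # the threshold would be tuned on a development set
print(f"LLR = {llr:.3f}, accept = {accept}")
```

A spoofing attack such as Tacotron 2 + WaveNet succeeds against this kind of scorer when the synthesized trial yields an LLR above the acceptance threshold for the target speaker.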
