Abstract

Learning robust representations of speaker identity is a key challenge in speaker verification, since robust representations generalize well to real-world scenarios with domain or intra-speaker variation. In this study, we improve the well-established ECAPA-TDNN framework to strengthen its domain robustness on low-resource cross-domain speaker verification tasks. Specifically, we first propose a novel dual-model self-learning approach to produce robust speaker identity embeddings, in which the ECAPA-TDNN is extended into a dual-model structure that is trained and regularized with self-supervised learning between different intermediate acoustic representations. We then enhance the dual model by combining the self-supervised and supervised losses in a time-dependent manner, improving its overall generalization capability. Furthermore, to better exploit the complementary information in the dual model's outputs, we explore several methods for similarity computation and score fusion. Experiments on the publicly available VoxCeleb2 and VoxMovies datasets demonstrate that the proposed dual-model regularization and fusion methods outperform a strong baseline, with relative EER reductions of 9.07%–11.6% across in-domain and cross-domain evaluation sets. Importantly, our approach is effective in both supervised and unsupervised scenarios for low-resource cross-domain speaker verification.
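The abstract mentions two mechanisms without giving details: a time-dependent combination of the supervised and self-supervised losses, and score-level fusion of the two branches' similarity scores. A minimal sketch of both ideas is below; the linear decay schedule, the 0.5 floor, and the equal-weight fusion coefficient `alpha` are illustrative assumptions, not the paper's actual choices.

```python
import math

def ssl_weight(step, total_steps, ramp_frac=0.5):
    # Hypothetical time-dependent schedule: the self-supervised term starts
    # dominant and decays linearly over the first `ramp_frac` of training,
    # then holds at 0.5 so both losses contribute equally.
    progress = min(step / (ramp_frac * total_steps), 1.0)
    return 1.0 - 0.5 * progress  # 1.0 -> 0.5

def combined_loss(sup_loss, ssl_loss, step, total_steps):
    # Convex combination of the two losses, weighted by training progress.
    w = ssl_weight(step, total_steps)
    return (1.0 - w) * sup_loss + w * ssl_loss

def cosine(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def fused_score(enroll_embs, test_embs, alpha=0.5):
    # Score-level fusion: each branch of the dual model produces one
    # embedding per utterance; the two cosine scores are averaged.
    s1 = cosine(enroll_embs[0], test_embs[0])
    s2 = cosine(enroll_embs[1], test_embs[1])
    return alpha * s1 + (1.0 - alpha) * s2
```

Embedding-level fusion (e.g. concatenating or averaging the two branch embeddings before scoring) is the natural alternative to the score-level fusion shown here; the abstract says several variants were explored but does not specify which performed best.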
