Abstract
In the real world, speaker recognition systems usually suffer severe performance degradation due to the domain mismatch between training and test conditions. To alleviate the harmful effect of domain shift, unsupervised domain adaptation methods have been introduced to learn domain-invariant speaker representations, but they focus on the single-source-to-single-target adaptation setting. However, labeled speaker data are usually collected from multiple sources, such as different languages, genres, and devices, and single-domain adaptation methods cannot handle this more complex multiple-domain mismatch problem. To address this issue, we propose a multiple-domain adaptation framework named CentriForce to extract domain-invariant speaker representations for speaker recognition. Unlike previous methods, CentriForce learns multiple domain-related speaker representation spaces. To mitigate the multiple-domain mismatch, CentriForce reduces the Wasserstein distance between each pair of source and target domains in their domain-related representation space, and meanwhile uses the target domain as an anchor point to draw all source domains closer to each other. In our experiments, CentriForce achieves the best performance on most of the 16 challenging adaptation tasks, compared with other competing adaptation methods. An ablation study and representation visualization further demonstrate its effectiveness in learning domain-invariant speaker embeddings.
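The abstract describes the alignment objective only at a high level. As a rough illustration of the idea, not the paper's implementation, the sketch below approximates the pairwise source-to-target Wasserstein alignment using a sliced Wasserstein distance in a single shared embedding space (the paper uses multiple domain-related spaces); all function names, shapes, and the equal-batch-size assumption are illustrative.

```python
# Minimal sketch (assumed, not the authors' code) of the multi-source
# alignment idea: pull every source domain's embedding distribution
# toward the shared target domain, which acts as a common anchor.
import torch

def sliced_wasserstein(x: torch.Tensor, y: torch.Tensor,
                       n_projections: int = 64) -> torch.Tensor:
    """Approximate W1 between two embedding sets of shape (n, d).

    Assumes x and y have the same number of samples, for simplicity.
    """
    d = x.size(1)
    # Random unit directions to project the d-dimensional embeddings onto.
    theta = torch.randn(d, n_projections, device=x.device)
    theta = theta / theta.norm(dim=0, keepdim=True)
    # Sorting the 1-D projections gives the optimal 1-D transport matching.
    proj_x = (x @ theta).sort(dim=0).values
    proj_y = (y @ theta).sort(dim=0).values
    return (proj_x - proj_y).abs().mean()

def multi_source_alignment_loss(source_embs: list[torch.Tensor],
                                target_emb: torch.Tensor) -> torch.Tensor:
    """Sum of distances from each source domain to the target anchor.

    Because every source is drawn toward the same target distribution,
    the source domains are implicitly drawn closer to one another too.
    """
    return sum(sliced_wasserstein(s, target_emb) for s in source_embs)

# Usage: three source domains (e.g. different languages) and one target.
sources = [torch.randn(128, 256) for _ in range(3)]
target = torch.randn(128, 256)
loss = multi_source_alignment_loss(sources, target)
```

Using the target as the shared anchor, rather than aligning all source pairs directly, keeps the number of distance terms linear in the number of source domains while still drawing the sources together.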