Abstract

A speaker extraction algorithm seeks to extract the speech of a target speaker from a multi-talker speech mixture when given a cue that represents the target speaker, such as a pre-enrolled speech utterance or an accompanying video track. Visual cues are particularly useful when pre-enrolled speech is not available. In this work, we do not rely on the target speaker's pre-enrolled speech, but rather use the target speaker's face track as the speaker cue, which is referred to as the auxiliary reference, to form an attractor towards the target speaker. We advocate that the temporal synchronization between the speech and its accompanying lip movements is a direct and dominant audio-visual cue. Therefore, we propose a self-supervised pre-training strategy to exploit the speech-lip synchronization cue for target speaker extraction, which allows us to leverage abundant unlabeled in-domain data. We transfer the knowledge from the pre-trained model to the attractor encoder of the speaker extraction network. We show that the proposed speaker extraction network outperforms various competitive baselines in terms of signal quality, perceptual quality, and intelligibility, achieving state-of-the-art performance.

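The abstract does not include code; the sketch below is a minimal, illustrative PyTorch example of how a speech-lip synchronization objective can be trained in a self-supervised fashion, with temporally aligned audio-lip pairs as positives and time-shifted lip streams of the same clip as negatives, so that no speaker labels are needed. The names (SLSynSketch, sync_loss), the GRU encoders, and the negative-sampling scheme are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SLSynSketch(nn.Module):
    """Illustrative speech-lip synchronization model (not the authors' exact SLSyn)."""
    def __init__(self, audio_dim=80, lip_dim=512, emb_dim=256):
        super().__init__()
        self.audio_enc = nn.GRU(audio_dim, emb_dim, batch_first=True)  # encodes audio frames
        self.lip_enc = nn.GRU(lip_dim, emb_dim, batch_first=True)      # encodes lip-region features

    def forward(self, audio_feats, lip_feats):
        # audio_feats: (B, T_a, audio_dim); lip_feats: (B, T_v, lip_dim)
        a, _ = self.audio_enc(audio_feats)
        v, _ = self.lip_enc(lip_feats)
        # Mean-pool over time, then score synchrony with cosine similarity.
        a = F.normalize(a.mean(dim=1), dim=-1)
        v = F.normalize(v.mean(dim=1), dim=-1)
        return (a * v).sum(dim=-1)  # higher score = more likely in sync

def sync_loss(model, audio, lips):
    """Self-supervised objective: the aligned pair is the positive class,
    a half-clip time shift of the lip stream provides the negative."""
    pos = model(audio, lips)
    neg = model(audio, torch.roll(lips, shifts=lips.size(1) // 2, dims=1))
    logits = torch.stack([pos, neg], dim=1)                # (B, 2)
    labels = torch.zeros(audio.size(0), dtype=torch.long)  # index 0 = synced pair
    return F.cross_entropy(logits, labels)
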
Highlights

  • Humans have a remarkable ability to focus attention on a particular speech signal in the presence of multiple noise sources and competing background speakers [1]

  • To emulate human visual top-down attention during listening in the cocktail party scenario, we explore the speech-lip synchronization in a multi-talker setting with a pre-trained network, named the speech-lip synchronization (SLSyn) network

  • We propose a self-supervised training strategy for the SLSyn network such that the learning of speech-lip synchronization could leverage abundant unlabeled training data that are in the same domain as the speaker extraction task

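To make the knowledge transfer described in the highlights concrete, the sketch below shows one plausible way the lip encoder of a pre-trained synchronization model (such as SLSynSketch above) could initialize the attractor encoder of a speaker extraction network. The fusion and masking layers here are simplified placeholders and do not reproduce the published architecture.

import torch
import torch.nn as nn

class SpeakerExtractorSketch(nn.Module):
    """Illustrative extraction network whose visual (attractor) branch is
    initialized from a pre-trained synchronization model."""
    def __init__(self, sync_model, mix_dim=256, emb_dim=256):
        super().__init__()
        # Knowledge transfer: reuse the lip encoder learned with the
        # self-supervised synchronization objective as the attractor encoder.
        self.attractor_enc = sync_model.lip_enc
        self.mix_enc = nn.Conv1d(1, mix_dim, kernel_size=16, stride=8)
        self.mask_net = nn.Sequential(
            nn.Linear(mix_dim + emb_dim, mix_dim), nn.ReLU(),
            nn.Linear(mix_dim, mix_dim), nn.Sigmoid())
        self.decoder = nn.ConvTranspose1d(mix_dim, 1, kernel_size=16, stride=8)

    def forward(self, mixture, lip_feats):
        # mixture: (B, samples); lip_feats: (B, T_v, lip_dim)
        mix = self.mix_enc(mixture.unsqueeze(1))            # (B, mix_dim, T)
        attractor, _ = self.attractor_enc(lip_feats)
        attractor = attractor.mean(dim=1)                   # (B, emb_dim) target-speaker attractor
        fused = torch.cat(
            [mix.transpose(1, 2),
             attractor.unsqueeze(1).expand(-1, mix.size(-1), -1)], dim=-1)
        mask = self.mask_net(fused).transpose(1, 2)         # (B, mix_dim, T)
        return self.decoder(mix * mask).squeeze(1)          # estimated target speech

In practice, the transferred encoder would typically be fine-tuned jointly with the extraction objective, which is how the pre-trained synchronization cue steers the network towards the target speaker.
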

Introduction

Humans have a remarkable ability to focus attention on a particular speech signal in the presence of multiple noise sources and competing background speakers [1]. The speaker extraction algorithm mimics human selective attention to extract only the target speaker's speech in such an adverse acoustic environment, which is referred to as the cocktail party problem [2].
