Abstract

Speech separation is a central problem in multi-speaker speech recognition, and modelling the long-term autocorrelation of speech signal sequences is essential to it. The key challenges are learning the intra-speaker autocorrelation of each speaker's speech effectively, modelling both the local (intra-block) and global (intra- and inter-block) dependence features of the speech sequence, and achieving real-time separation with as few parameters as possible. In this paper, the local and global dependence features of the speech sequence are extracted by different transformer structures. A forward adaptive module for channel and spatial autocorrelation is proposed to give the separation model good channel adaptability (channel-adaptive modelling) and spatial adaptability (spatial-adaptive modelling). In addition, a speaker enhancement module at the back end of the separation model further enhances or suppresses the speech of different speakers by exploiting the mutual suppression characteristics of the source signals. Experiments on the public WSJ0-2mix corpus show that the proposed separation network achieves a better scale-invariant signal-to-noise ratio improvement (SI-SNRi) than the baseline models. The proposed method offers a solution for speech separation and speech recognition in multi-speaker scenarios.
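Since the reported results are given in SI-SNRi, a minimal NumPy sketch of the metric may be helpful; the function names, the `eps` stabilizer, and the zero-mean normalization step are standard conventions for this metric, not details taken from the paper itself.

```python
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant signal-to-noise ratio (SI-SNR) in dB.

    Both signals are zero-meaned so the measure is invariant to DC
    offset as well as to rescaling of the estimate.
    """
    estimate = estimate - np.mean(estimate)
    target = target - np.mean(target)
    # Project the estimate onto the target to isolate the "clean" component.
    s_target = (np.dot(estimate, target) / (np.dot(target, target) + eps)) * target
    e_noise = estimate - s_target
    return 10 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

def si_snri(estimate, target, mixture):
    """SI-SNR improvement: gain of the separated output over the raw mixture."""
    return si_snr(estimate, target) - si_snr(mixture, target)
```

On WSJ0-2mix, `target` would be one speaker's clean reference, `mixture` the two-speaker mix, and `estimate` the corresponding separated output; SI-SNRi is averaged over all utterances and speakers.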
