Abstract

Real-world sounds are often interrupted by various kinds of noise, and the target signal in the mixture is often degraded or lost. While the human auditory system can extract the target signal from a mixture and restore its degraded or lost parts simultaneously, current computational models often simplify this complex scenario into two separate tasks: audio inpainting and speech enhancement. In this work, we take a pioneering step towards modeling auditory restoration, that is, restoring a target speech signal that has missing parts and is interfered with by background noise. Unlike the speech enhancement task, we attempt to fill in the missing gaps in the presence of background noise. Unlike the audio inpainting task, our input signal contains noise and the positions of the missing gaps are unknown. In other words, we attempt to reduce interference and restore missing gaps simultaneously. We propose an Hourglass-shaped Convolutional Recurrent Network (HCRN) trained with a Spectro-Temporal loss to restore the target signal from the incomplete noisy mixture. Moreover, instead of restoring non-human sounds, we focus on speech restoration, which poses more challenges for reconstruction. Both the quantitative and qualitative results show that our proposed method can suppress the background noise and identify and restore the missing gaps of the salient signal from unreliable context information. Our code is available at https://github.com/aispeech-lab/HCRN.
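The abstract does not define the Spectro-Temporal loss in detail; a common formulation of such joint objectives combines a time-domain term with a spectral-magnitude term. The following is a minimal NumPy sketch under that assumption — the frame size, hop size, window, L1 distance, and weighting `alpha` are all illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def stft_mag(x, frame=256, hop=128):
    """Magnitude spectrogram via framed real FFT with a Hann window.

    Frame/hop sizes here are illustrative defaults, not the paper's settings.
    """
    win = np.hanning(frame)
    n = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop : i * hop + frame] * win for i in range(n)])
    return np.abs(np.fft.rfft(frames, axis=-1))

def spectro_temporal_loss(est, ref, alpha=0.5):
    """Weighted sum of a temporal (waveform L1) term and a spectral
    (magnitude-spectrogram L1) term -- one plausible reading of a
    "Spectro-Temporal" objective."""
    t_loss = np.mean(np.abs(est - ref))                        # temporal term
    s_loss = np.mean(np.abs(stft_mag(est) - stft_mag(ref)))    # spectral term
    return alpha * t_loss + (1 - alpha) * s_loss
```

Combining both terms penalizes errors that a purely spectral loss can miss (e.g. phase-related waveform distortion) while still emphasizing spectral structure, which matters when reconstructing missing gaps.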
