Abstract
Single-channel speech enhancement has made great progress with the development of deep learning. Recently, some researchers have predicted the real and imaginary parts of the output separately using deep complex convolutional networks and achieved state-of-the-art performance. Building on this, we design a new network structure, the Deep Complex Convolution Transformer Network (DCCTN), dedicated to single-channel speech enhancement under far-field, extremely low signal-to-noise ratio (SNR) conditions. First, starting from the Deep Complex Convolution Recurrent Network (DCCRN), a two-stage transformer masking module replaces the recurrent structure to better capture the long-term correlations of speech. Second, a deep complex transformer structure and a Gaussian weight matrix are introduced to make the model better suited to far-field and very low SNR scenarios. For the experiments, we mixed real far-field clean speech with noise to construct the training and test datasets. The results show that, in the target scenario, the proposed DCCTN significantly improves speech-enhancement performance compared with most recent methods.
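The two building blocks named above, complex-valued processing of real/imaginary feature maps and a Gaussian weight matrix in attention, can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions, not the paper's implementation: the function names, the additive log-domain form of the Gaussian bias, and the `sigma` hyperparameter are assumptions introduced here.

```python
import numpy as np

def complex_multiply(xr, xi, wr, wi):
    """Complex product (xr + j*xi) * (wr + j*wi): the arithmetic at the
    heart of a complex conv/linear layer, where real and imaginary
    parts are mixed rather than processed independently."""
    return xr * wr - xi * wi, xr * wi + xi * wr

def gaussian_bias(seq_len, sigma=8.0):
    """Log-domain Gaussian weight matrix: entry (i, j) is
    -(i - j)^2 / (2 * sigma^2), so adding it to attention scores
    before the softmax multiplicatively down-weights distant frames.
    `sigma` is a hypothetical hyperparameter."""
    idx = np.arange(seq_len)
    dist = (idx[:, None] - idx[None, :]).astype(float)
    return -dist ** 2 / (2.0 * sigma ** 2)

def gaussian_weighted_attention(scores, sigma=8.0):
    """Softmax attention over raw scores plus a Gaussian locality bias."""
    biased = scores + gaussian_bias(scores.shape[-1], sigma)
    e = np.exp(biased - biased.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

In this sketch the Gaussian bias concentrates attention mass on temporally nearby frames, one plausible way a Gaussian weight matrix can steer a transformer toward local speech structure while the softmax still allows long-range context.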