Abstract
Convolutional recurrent neural networks (CRNs) using convolutional encoder-decoder (CED) structures have shown promising performance for single-channel speech enhancement. These CRNs handle temporal modeling by integrating long short-term memory (LSTM) layers between the convolutional encoder and decoder. However, in such a CRN, the structured organization of internal representations in feature maps and the local focus of the convolutional mappings have to be discarded for the fully-connected LSTM processing. Furthermore, CRNs can be quite restricted concerning the feature space dimension at the input of the LSTM, which, through its fully-connected nature, requires a large number of trainable parameters. As a first novelty, we propose to replace the fully-connected LSTM by a convolutional LSTM (ConvLSTM) and call the resulting network a fully convolutional recurrent network (FCRN). Secondly, since the ConvLSTM retains the structured organization of its input feature maps, we show that this helps to internally represent the harmonic structure of speech, allowing us to handle high-dimensional input features with fewer trainable parameters than an LSTM. The proposed FCRN clearly outperforms CRN reference models with similar amounts of trainable parameters in terms of PESQ, STOI, and segmental ∆SNR.
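To illustrate the core idea, here is a minimal sketch (not the authors' implementation) of a ConvLSTM cell in PyTorch, compared against a fully-connected LSTM at a hypothetical CED bottleneck of 64 feature maps over 32 frequency bins; the cell structure, bottleneck shape, and kernel size are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal 1-D ConvLSTM cell: all four LSTM gates are computed by a
    single convolution over the frequency axis of the concatenated input
    and hidden feature maps, preserving their structured organization."""
    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        self.gates = nn.Conv1d(in_ch + hid_ch, 4 * hid_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state  # hidden and cell state, each (batch, hid_ch, freq)
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Hypothetical bottleneck shape: 64 feature maps over 32 frequency bins.
F_BINS, CH = 32, 64
conv_cell = ConvLSTMCell(CH, CH)             # kernel shared across frequency
fc_cell = nn.LSTM(F_BINS * CH, F_BINS * CH)  # fully-connected baseline

count = lambda m: sum(p.numel() for p in m.parameters())
print(f"ConvLSTM cell: {count(conv_cell):>10,} parameters")  # ~0.1 M
print(f"FC LSTM layer: {count(fc_cell):>10,} parameters")    # ~33.6 M
```

Because the convolution kernel is shared across the frequency axis, the ConvLSTM's parameter count is independent of the feature-map height, which is what allows such a network to handle high-dimensional input features with far fewer trainable parameters than a fully-connected LSTM operating on the flattened feature map.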