Abstract
This paper presents a time–frequency masking-based online multi-channel speech enhancement approach that uses a convolutional recurrent neural network to estimate the mask. The magnitude and phase components of the short-time Fourier transform (STFT) coefficients for multiple time frames are provided as input, so that the network can discriminate between the directional speech and the noise components based on the spatial characteristics of the individual signals as well as their spectro-temporal structure. The estimation of two different masks, the ideal ratio mask (IRM) and the ideal binary mask (IBM), is discussed along with two different approaches for incorporating the mask to obtain the desired signal. In the first approach, the mask is applied directly as a real-valued gain to a reference microphone signal; in the second, the mask serves as an activity indicator for the recursive update of the power spectral density (PSD) matrices used within a beamformer. The performance of the proposed system, with each of the two estimated masks used in each of the two enhancement approaches, is evaluated with both simulated and measured room impulse responses. The results show that the IBM is better suited as an indicator for the PSD updates, whereas direct application of the IRM as a real-valued gain yields a larger improvement in short-time objective intelligibility (STOI). An analysis of the performance also demonstrates the robustness of the system to different angular positions of the speech source.
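As a rough illustration of the two enhancement approaches described in the abstract, the following sketch contrasts applying an estimated mask directly as a real-valued gain on a reference microphone with using a mask as an activity indicator for recursive PSD updates feeding a beamformer. It is a minimal sketch, not the authors' implementation: the MVDR formulation, the forgetting factor `alpha`, the STFT dimensions, and all variable names (`stft`, `irm`, `ibm`, `update_psd`, etc.) are assumptions introduced here for illustration, since the abstract does not specify the beamformer type or update rule.

```python
import numpy as np


def apply_mask_as_gain(stft_ref, mask):
    """Approach 1: apply a real-valued mask (e.g. an IRM) directly to the
    STFT coefficients of a reference microphone channel."""
    # stft_ref: (frames, freqs) complex STFT of the reference channel
    # mask:     (frames, freqs) real-valued mask in [0, 1]
    return mask * stft_ref


def update_psd(psd, x, mask_tf, alpha=0.95):
    """Recursive (exponentially weighted) update of a spatial PSD matrix,
    gated by the mask value of the current time-frequency bin (assumed rule)."""
    # psd:     (mics, mics) current PSD estimate for one frequency bin
    # x:       (mics,) multi-channel STFT vector for this time-frequency bin
    # mask_tf: scalar mask value (e.g. IBM) acting as an activity indicator
    w = mask_tf * (1.0 - alpha)          # update strength scaled by the mask
    return (1.0 - w) * psd + w * np.outer(x, x.conj())


def mvdr_weights(psd_speech, psd_noise, ref_mic=0):
    """MVDR weights for one frequency bin, w = Phi_n^{-1} Phi_s e_ref / tr(Phi_n^{-1} Phi_s).
    MVDR is used here as a common choice; the paper's beamformer may differ."""
    num = np.linalg.solve(psd_noise, psd_speech)      # Phi_n^{-1} Phi_s
    return num[:, ref_mic] / (np.trace(num) + 1e-10)  # (mics,)


# Minimal usage example with random data standing in for STFT frames and masks.
mics, freqs, frames = 4, 129, 50
rng = np.random.default_rng(0)
stft = rng.standard_normal((frames, freqs, mics)) + 1j * rng.standard_normal((frames, freqs, mics))
irm = rng.uniform(0.0, 1.0, (frames, freqs))   # stand-in for a network-estimated IRM
ibm = (irm > 0.5).astype(float)                # stand-in for a network-estimated IBM

# Approach 1: IRM applied as a gain to the reference microphone (channel 0).
enhanced_gain = apply_mask_as_gain(stft[:, :, 0], irm)

# Approach 2: IBM gates the recursive speech/noise PSD updates, then MVDR per bin.
enhanced_bf = np.zeros((frames, freqs), dtype=complex)
psd_s = np.tile(np.eye(mics, dtype=complex), (freqs, 1, 1))
psd_n = np.tile(np.eye(mics, dtype=complex), (freqs, 1, 1))
for t in range(frames):
    for f in range(freqs):
        x = stft[t, f]
        psd_s[f] = update_psd(psd_s[f], x, ibm[t, f])          # speech-active bins
        psd_n[f] = update_psd(psd_n[f], x, 1.0 - ibm[t, f])    # noise-active bins
        enhanced_bf[t, f] = mvdr_weights(psd_s[f], psd_n[f]).conj() @ x
```

The frame-by-frame recursion is what makes the PSD-based variant suitable for online processing: each new frame only refreshes the running speech and noise statistics in the bins where the mask indicates the corresponding source is active.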