Abstract

In this investigation, a deep neural network (DNN)-based speech extraction method is proposed to enhance a speech signal propagating from the desired direction. The proposed method integrates knowledge of a sound propagation model and the time-varying characteristics of a speech source into a DNN-based separation framework. This approach outputs the separated speech source using time-varying spatial filtering, which achieves better speech extraction performance than time-invariant spatial filtering. Because the gradients of all modules can be calculated, back-propagation can be performed to maximize the speech quality of the output signal in an end-to-end manner. Guided information is also modeled based on the sound propagation model, which facilitates disentangled representations of the target speech source and the noise signals. The experimental results demonstrate that the proposed method extracts the target speech source more accurately than conventional DNN-based speech source separation and conventional speech extraction using time-invariant spatial filtering.
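As a rough illustration of the time-varying, mask-informed spatial filtering idea described above, the sketch below recomputes MVDR-style filter weights at every STFT frame from a DNN-estimated speech mask and a plane-wave steering vector (a simple stand-in for the sound propagation model). All function names, array shapes, and parameters are illustrative assumptions rather than the paper's actual implementation, and the end-to-end training of the mask estimator is omitted.

```python
import numpy as np

def steering_vector(doa_deg, n_mics, mic_spacing, freq, c=343.0):
    """Free-field plane-wave steering vector for a uniform linear array
    (a simplified sound propagation model; the paper's model may differ)."""
    delays = np.arange(n_mics) * mic_spacing * np.cos(np.deg2rad(doa_deg)) / c
    return np.exp(-2j * np.pi * freq * delays)                     # (n_mics,)

def time_varying_spatial_filter(X, speech_mask, doa_deg, mic_spacing, fs,
                                alpha=0.9, eps=1e-6):
    """
    Per-frame MVDR-like filtering of a multichannel STFT.

    X           : (n_mics, n_freq, n_frames) complex STFT of the mixture
    speech_mask : (n_freq, n_frames) DNN-estimated speech presence mask in [0, 1]
    doa_deg     : assumed direction of arrival of the desired source
    """
    n_mics, n_freq, n_frames = X.shape
    freqs = np.linspace(0.0, fs / 2, n_freq)
    Y = np.zeros((n_freq, n_frames), dtype=complex)

    for f in range(n_freq):
        d = steering_vector(doa_deg, n_mics, mic_spacing, freqs[f])
        Phi_n = eps * np.eye(n_mics, dtype=complex)                # noise covariance
        for t in range(n_frames):
            x = X[:, f, t][:, None]                                # (n_mics, 1)
            noise_w = 1.0 - speech_mask[f, t]
            # Recursive, mask-weighted covariance update: the resulting
            # beamformer weights change from frame to frame (time-varying).
            Phi_n = alpha * Phi_n + (1.0 - alpha) * noise_w * (x @ x.conj().T)
            Phi_inv_d = np.linalg.solve(Phi_n + eps * np.eye(n_mics), d)
            w = Phi_inv_d / (d.conj() @ Phi_inv_d)                 # MVDR weights
            Y[f, t] = (w.conj() @ x).item()                        # enhanced frame
    return Y
```

In an end-to-end setup such as the one described in the abstract, the mask (and hence the filter weights) would be produced by a DNN and all of these operations would be differentiable, so the network could be trained directly on the quality of the output signal.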
