Abstract

Various time-frequency (T-F) masks have been applied to sound source localization tasks, and deep learning has dramatically advanced T-F mask estimation. However, existing masks are usually designed for speech separation and are suitable only for single-channel signals. A novel complex-valued T-F mask is proposed that reserves the head-related transfer function (HRTF) and is customized for binaural sound source localization. In addition, because the convolutional neural network exploited to estimate the proposed mask takes binaural spectral information as both input and output, accurate binaural cues can be preserved. Whereas conventional T-F masks emphasize T-F units dominated by a single speech source, the HRTF-reserved mask eliminates the speech component while keeping the direct propagation path. More reliable localization features can therefore be extracted from the estimated HRTF for the final direction-of-arrival estimation, making binaural sound source localization guided by the proposed mask robust in noisy and reverberant acoustic environments. The experimental results demonstrate that the new T-F mask is superior to conventional T-F masks and leads to better sound source localization performance in adverse environments.
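The following is a minimal sketch, not the authors' implementation, of how a complex-valued T-F mask might be applied to binaural spectrograms and how interaural cues could then be read off for direction-of-arrival estimation. The array shapes, the placeholder masks, and the specific cue definitions (interaural level and phase differences derived from the interaural transfer function) are assumptions for illustration only; in the paper the masks would be produced by the CNN described in the abstract.

```python
# Sketch only: complex T-F masking of binaural STFTs and extraction of
# interaural cues. Shapes, masks, and cue definitions are assumed, not
# taken from the paper.
import numpy as np

def apply_complex_mask(stft_left, stft_right, mask_left, mask_right):
    """Element-wise complex masking of the binaural spectrograms.

    stft_* : complex arrays of shape (freq_bins, frames)
    mask_* : complex-valued T-F masks of the same shape (e.g. CNN outputs)
    Intended to retain the direct-path (HRTF-related) component while
    suppressing the source signal itself.
    """
    return stft_left * mask_left, stft_right * mask_right

def binaural_cues(masked_left, masked_right, eps=1e-8):
    """Per-T-F-unit interaural cues from the masked spectrograms.

    The interaural transfer function (right over left) carries the
    HRTF-related information: its magnitude gives the interaural level
    difference (ILD) and its phase the interaural phase difference (IPD),
    which a DOA estimator can pool over frames.
    """
    itf = masked_right / (masked_left + eps)   # interaural transfer function
    ild = 20.0 * np.log10(np.abs(itf) + eps)   # ILD in dB
    ipd = np.angle(itf)                        # IPD in radians
    return ild, ipd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    F, T = 257, 100                            # assumed 512-point STFT, 100 frames
    X_L = rng.standard_normal((F, T)) + 1j * rng.standard_normal((F, T))
    X_R = rng.standard_normal((F, T)) + 1j * rng.standard_normal((F, T))
    M_L = np.ones((F, T), dtype=complex)       # placeholder masks; in the paper
    M_R = np.ones((F, T), dtype=complex)       # these would come from the CNN
    Y_L, Y_R = apply_complex_mask(X_L, X_R, M_L, M_R)
    ild, ipd = binaural_cues(Y_L, Y_R)
    print(ild.shape, ipd.shape)
```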
