Abstract
Echo cancelers typically employ control mechanisms to prevent adaptive filter updates during double-talk events. By contrast, this paper exploits the information contained in time-varying second order statistics of nonstationary signals to update adaptive filters and learn echo path responses during double-talk. First, a framework is presented for describing mixing and blind separation of independent groups of signals. Then several echo cancellation problems are cast in this framework, including the problem of simultaneous acoustic and line echo cancellation as encountered in speaker phones. A maximum-likelihood approach is taken to estimate both the unknown signal statistics as well as echo canceling filters. When applied to speech signals, the techniques developed in this paper typically achieved between 30 and 40 dB of echo return loss enhancement (ERLE) during continuous double-talking.
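For reference, ERLE is conventionally defined as the ratio, in dB, of the echo power picked up before cancellation to the residual echo power after the adaptive filter has been applied. The sketch below is not from the paper; the function name erle_db and the toy signals are illustrative assumptions used only to show how the 30-40 dB figure is measured.

```python
import numpy as np

def erle_db(echo, residual, eps=1e-12):
    """Echo return loss enhancement in dB: ratio of echo power before
    cancellation to residual echo power after cancellation.
    (Illustrative helper; not part of the paper.)"""
    p_echo = np.mean(np.asarray(echo, dtype=float) ** 2)
    p_res = np.mean(np.asarray(residual, dtype=float) ** 2)
    return 10.0 * np.log10((p_echo + eps) / (p_res + eps))

# Toy check: a canceler that removes all but 0.1% of the echo power
# yields about 30 dB of ERLE, the lower end of the range reported above.
rng = np.random.default_rng(0)
echo = rng.standard_normal(16000)
residual = echo * np.sqrt(1e-3)  # ~0.1% of the original echo power remains
print(f"ERLE = {erle_db(echo, residual):.1f} dB")
```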