Abstract

We propose a general framework for adaptation control using deep neural networks (NNs) and apply it to acoustic echo cancellation (AEC). First, the optimal step-size that controls the adaptation is derived offline by solving a constrained nonlinear optimization problem that minimizes the adaptive filter misadjustment. Then, a deep NN is trained to learn the relation between the input data and the optimal step-size. At run time, the NN infers the optimal step-size from streaming data and feeds it to an NLMS filter for AEC. This data-driven method makes no assumptions about the acoustic setup and is entirely non-parametric. Experiments with 100 h of real and synthetic data show that the proposed method outperforms competing approaches in echo cancellation, speech distortion, and convergence speed during both single-talk and double-talk.
