Abstract

In this paper, we investigate the convergence performance of a sparsified kernel least mean square (KLMS) algorithm in which an input is added to the dictionary only when the magnitude of the prediction error exceeds a preset threshold. Under certain conditions, we derive an approximate value of the steady-state excess mean square error (EMSE). Simulation results confirm the theoretical predictions and yield some interesting findings: sparsification can be used not only to constrain the network size (and hence reduce the computational burden) but also, in some cases, to improve the steady-state performance.
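The sparsification rule described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm: it assumes a Gaussian kernel, and the parameter names (`eta` for the step size, `sigma` for the kernel width, `delta` for the error threshold) are placeholders chosen for illustration.

```python
import numpy as np

def klms_sparsified(X, d, eta=0.5, sigma=0.7, delta=0.05):
    """Sketch of KLMS with error-threshold sparsification.

    An input x_n is added to the dictionary (as a new kernel center)
    only when the magnitude of the prediction error exceeds delta.
    Parameter names and defaults are illustrative assumptions.
    """
    centers = []   # dictionary of kernel centers
    coeffs = []    # expansion coefficients
    errors = []    # prediction errors, for monitoring convergence
    for x, target in zip(X, d):
        # Predict with the current kernel expansion
        if centers:
            k = np.exp(-np.sum((np.array(centers) - x) ** 2, axis=1)
                       / (2 * sigma ** 2))
            y = float(np.dot(coeffs, k))
        else:
            y = 0.0
        e = target - y
        errors.append(e)
        # Sparsification: grow the dictionary only on large errors,
        # so the network size stops growing once errors fall below delta
        if abs(e) > delta:
            centers.append(np.atleast_1d(x).astype(float))
            coeffs.append(eta * e)
    return centers, coeffs, errors
```

For example, running this on samples of a simple nonlinear map (say, `d = sin(x)` plus noise) shows the dictionary growing quickly at first and then stalling once the prediction error settles below the threshold, which is the mechanism that constrains the network size.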
