Abstract

The Normalized Least Mean Square (NLMS) algorithm belongs to the gradient class of adaptive algorithms and addresses the slow convergence of the Least Mean Square (LMS) algorithm. Motivated by the recent use of the q-gradient in adaptive filtering, we develop a q-gradient-based NLMS algorithm. More specifically, we replace the conventional gradient with the q-gradient to derive the NLMS weight-update recursion. We also provide a detailed mean-square-error (MSE) analysis of the proposed algorithm for both the transient and the steady-state regimes, deriving closed-form expressions for the MSE learning curve and the steady-state excess MSE (EMSE). Simulation results demonstrate the superiority of the proposed algorithm over the conventional NLMS algorithm and validate the theoretical analysis.
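The full text is not available here, but the weight-update recursion the abstract describes can be sketched. A minimal NumPy sketch follows, assuming the common q-calculus result that applying the Jackson q-derivative to the instantaneous quadratic cost scales the conventional gradient term e(n)x(n) by (1 + q)/2; the function name `q_nlms`, the parameter values, and the test channel `h` are illustrative, not taken from the paper.

```python
import numpy as np

def q_nlms(x, d, num_taps, mu=0.5, q=2.0, eps=1e-6):
    """Sketch of a q-gradient NLMS filter (assumed form, not the paper's
    exact recursion): the conventional stochastic gradient e(n)x(n) is
    scaled by (1 + q)/2, the factor produced by the Jackson q-derivative
    of the quadratic cost, then normalized by the regressor energy."""
    w = np.zeros(num_taps)
    g = (1.0 + q) / 2.0              # q-gradient scaling; q = 1 recovers NLMS
    errors = np.empty(len(x))
    for n in range(len(x)):
        # regressor: the num_taps most recent input samples, newest first
        u = x[max(0, n - num_taps + 1):n + 1][::-1]
        u = np.pad(u, (0, num_taps - len(u)))
        e = d[n] - w @ u             # a-priori estimation error
        w = w + (mu * g / (eps + u @ u)) * e * u   # normalized q-update
        errors[n] = e
    return w, errors

# Usage: identify a short FIR channel (hypothetical) from noisy observations.
rng = np.random.default_rng(0)
h = np.array([0.7, -0.3, 0.2])       # unknown system to be identified
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w, errors = q_nlms(x, d, num_taps=3, mu=0.5, q=2.0)
```

With q > 1 the effective step size grows by (1 + q)/2, which is one way a q-gradient can speed up early convergence; stability still requires the effective normalized step to stay below 2.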
