Abstract
Since both the least mean-square (LMS) and least mean-fourth (LMF) algorithms individually suffer from the problem of eigenvalue spread, the mixed-norm LMS-LMF algorithm suffers from it as well. To overcome this problem for the mixed-norm LMS-LMF, we adopt here the same normalization technique (normalizing by the power of the input) that was used successfully with the LMS and LMF separately. Consequently, a new variable-parameter normalized mixed-norm (VPNMN) adaptive algorithm is proposed in this study. This algorithm is derived by introducing a time-varying mixing parameter into the traditional mixed-norm LMS-LMF weight-update equation. The time-varying mixing parameter is adjusted according to a well-known technique used in the adaptation of the step-size parameter of the LMS algorithm. To study the theoretical aspects of the proposed VPNMN adaptive algorithm, we also carry out its convergence analysis and assess its performance using the concept of energy conservation. Extensive simulation results corroborate our theoretical findings and show that the proposed algorithm achieves a substantial improvement in both convergence time and steady-state error. Finally, the VPNMN algorithm proved its usefulness in a noise-cancellation application, where it outperformed the normalized least mean-square (NLMS) algorithm.
Highlights
Due to its simplicity, the least mean-square (LMS) [1,2] algorithm is the most widely used algorithm for adaptive filters in many applications
Excellent agreement between theory and simulation results is obtained, and the proposed variable-parameter normalized mixed-norm (VPNMN) algorithm delivers consistent performance
In a noise-cancellation example, we study the performance of the VPNMN algorithm for the application of noise cancellation
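As a rough sketch of what such a noise-cancellation setup looks like, the following example is built entirely on our own illustrative assumptions (the signal, the noise path, and every parameter value are invented for the demonstration and are not taken from the paper). The adaptive filter sees the raw reference noise and, using a normalized mixed-norm update with a fixed mixing parameter for brevity (the paper adapts this parameter over time), learns the unknown noise path so that the error output recovers the clean signal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical noise-cancellation setup: the primary sensor picks up a
# clean signal plus noise that reached it through an unknown path; the
# reference sensor picks up the raw noise alone.
N, M = 5000, 4
t = np.arange(N)
s = np.sin(2 * np.pi * t / 50)            # clean signal at the primary sensor
v = rng.standard_normal(N)                # reference noise input
h = np.array([0.8, -0.4, 0.2, 0.1])       # unknown noise path (assumed)
d = s + np.convolve(v, h)[:N]             # primary input: signal + filtered noise

w = np.zeros(M)                           # adaptive filter weights
mu, eps, alpha = 0.02, 1e-6, 0.95         # step size, regularizer, mixing (assumed)
e = np.zeros(N)
for n in range(M - 1, N):
    xn = v[n - M + 1:n + 1][::-1]         # reference regressor [v_n, ..., v_{n-M+1}]
    e[n] = d[n] - w @ xn                  # error output = estimate of the clean signal
    # normalized mixed-norm update: LMS term + LMF term, scaled by input power
    w += mu / (eps + xn @ xn) * (alpha * e[n] + 2 * (1 - alpha) * e[n] ** 3) * xn

# after convergence, the error output should track the clean signal
print(np.mean((e[-1000:] - s[-1000:]) ** 2) < 0.05)
```

Because the reference noise is uncorrelated with the clean signal, the filter converges toward the noise path itself, and subtracting its output from the primary input leaves the signal in the error term.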
Summary
The least mean-square (LMS) [1,2] algorithm is the most widely used algorithm for adaptive filters in many applications. The least mean-fourth (LMF) [3] algorithm was proposed later as a special case of the more general family of steepest-descent algorithms [4] with $2k$ error norms, $k$ being a positive integer. For both of these algorithms, the convergence behavior depends on the condition number, i.e., on the ratio of the maximum to the minimum eigenvalue of the autocorrelation matrix of the input signal, $\mathbf{R} = E[\mathbf{x}_n \mathbf{x}_n^T]$, where $\mathbf{x}_n$ is the input signal. As mentioned earlier, and because of its reliance on the LMS and the LMF, the algorithm defined by (9) is affected by the eigenvalue spread of the autocorrelation matrix of the input signal. To overcome this dependency, a VPNMN adaptive algorithm is introduced, and its weight-update recursion is given by the following expression: $\mathbf{w}_{n+1} = \mathbf{w}_n + \mu\left[\alpha_n e_n + 2(1-\alpha_n)e_n^3\right]\mathbf{x}_n$.
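To make the recursion above concrete, here is a minimal system-identification sketch that applies the mixed-norm update with the step normalized by the input power, as the abstract describes. The filter length, step size, noise level, and the fixed mixing parameter are our own illustrative choices (the paper adapts the mixing parameter over time rather than fixing it):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: identify an unknown FIR system with the normalized
# mixed-norm LMS-LMF update. All parameter values are illustrative.
M = 8                                  # filter length (assumed)
w_true = rng.standard_normal(M)        # unknown system to identify (assumed)
w = np.zeros(M)                        # adaptive weights
mu, eps, alpha = 0.05, 1e-6, 0.95      # step size, regularizer, mixing (assumed)

N = 2000
x = rng.standard_normal(N)             # white input signal
for n in range(M, N):
    xn = x[n - M:n][::-1]              # input regressor x_n
    d = w_true @ xn + 1e-3 * rng.standard_normal()   # desired output + noise
    e = d - w @ xn                     # a priori error e_n
    # w_{n+1} = w_n + mu/(eps + ||x_n||^2) [alpha e_n + 2(1-alpha) e_n^3] x_n
    w += mu / (eps + xn @ xn) * (alpha * e + 2 * (1 - alpha) * e ** 3) * xn

print(np.linalg.norm(w - w_true) < 0.1)   # weights approach the true system
```

Dividing the step by $\epsilon + \|\mathbf{x}_n\|^2$ is what removes the sensitivity to the input power, mirroring how the NLMS normalizes the plain LMS step; the cubic LMF term dominates while the error is large and the linear LMS term takes over near convergence.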