Abstract

While the least mean square (LMS) algorithm has been widely explored for some specific statistics of the driving process, an understanding of its behavior under general statistics has not been fully achieved. In this paper, the mean square convergence of the LMS algorithm is investigated for the large class of linearly filtered random driving processes. In particular, the paper contains the following contributions: (i) The parameter error vector covariance matrix can be decomposed into two parts: a first part that exists in the modal space of the driving process of the LMS filter, and a second part, existing in its orthogonal complement space, which does not contribute to the performance measures (misadjustment, mismatch) of the algorithm. (ii) The additive noise is shown to contribute only to the modal space of the driving process, independently of the noise statistics, and thus defines the steady state of the filter. (iii) While the previous results are derived under some approximation, an exact solution for very long filters is presented based on a matrix equivalence property, resulting in a new conservative stability bound that is more relaxed than previous ones. (iv) In particular, it is shown that the joint fourth-order moment of the decorrelated driving process is more relevant to the step-size bound than, as is often believed, the second-order moment. (v) We furthermore introduce a new correction factor accounting for the influence of the filter length as well as the statistics of the driving process, making our approach well suited even for short filters. (vi) All statements are validated by Monte Carlo simulations, demonstrating the strength of this novel approach in independently assessing the influence of filter length as well as of the correlation and probability density function of the driving process.
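As a point of reference for the quantities analyzed above (mismatch, steady state, step size), a minimal LMS system-identification loop with a linearly filtered driving process can be sketched as follows. All parameter values and the MA(1) shaping filter are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 16            # adaptive filter length (illustrative)
mu = 0.01         # step size, assumed inside the mean-square stability bound
n_iter = 20000

w_true = rng.standard_normal(M)    # impulse response of the unknown system
w_hat = np.zeros(M)                # LMS estimate of w_true

# White Gaussian generating process x_k, linearly filtered by an assumed
# MA(1) shaping filter to obtain a correlated driving process u.
x = rng.standard_normal(n_iter + M)
u_sig = np.convolve(x, [1.0, 0.5])[: n_iter + M]

for k in range(M, n_iter + M):
    u = u_sig[k - M + 1 : k + 1][::-1]             # regression vector u_k
    d = w_true @ u + 1e-2 * rng.standard_normal()  # noisy desired output
    e = d - w_hat @ u                              # a priori error
    w_hat = w_hat + mu * e * u                     # LMS update

# Relative mismatch between true and estimated impulse response.
mismatch = np.linalg.norm(w_true - w_hat) / np.linalg.norm(w_true)
print(f"mismatch = {mismatch:.4f}")
```

With this small step size and low noise level, the mismatch settles close to zero; increasing `mu` toward the stability bound trades faster convergence for a larger steady-state misadjustment.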

Highlights

  • The well-known least mean square (LMS) algorithm [1] is the most successful of all adaptive algorithms

  • In this contribution, a stochastic analysis of second-order moments, in terms of the parameter error covariance matrix, has been presented for the LMS algorithm under the large class of linearly filtered random driving processes

  • While results were previously only known for a small number of statistics, this contribution deals with the large class of linearly filtered white processes with arbitrary statistics



Introduction

The well-known least mean square (LMS) algorithm [1] is the most successful of all adaptive algorithms.

Notation

  • Desired output of the unknown system
  • Impulse response w of the unknown system
  • Estimate of w
  • Regression vector
  • Elements of the regression vector
  • Additive noise
  • Autocorrelation matrix of u_k
  • Diagonal matrix Λ = Q R_uu Q^T
  • Covariance matrix of w − w_k
  • White generating process
  • Second-order moment of x_k
  • Joint fourth-order moment of x_k
  • Step size
  • Upper-right Toeplitz matrix
  • Identity matrix of dimension M
  • Identity matrix of dimension P
  • Vector with ones as entries

In this case, the output vector u_k = A x_k is of dimension ℝ^{M×1}.

Modal space of the LMS algorithm
Learning and steady-state behavior
Findings
Conclusions

