Abstract

Gradient-type algorithms commonly employ a scalar step-size, i.e., each entry of the regression vector is multiplied by the same value before the coefficients are updated. More flexibility is obtained, however, when the step-size is a matrix: beyond individually scaling the entries of the regression vector, a suitable matrix also allows rotations and decorrelations. A well-known example of a fixed step-size matrix is the Newton–LMS algorithm, and for such a fixed matrix the conditions under which a gradient-type algorithm converges are well known. This article, in contrast, presents robustness and convergence conditions for a least-mean-square (LMS) algorithm with a time-variant matrix step-size. Using the example of a channel estimator in a cellular handset, it is shown that a particular choice of step-size matrix leads to considerable improvement over the fixed step-size case.
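The update described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it shows a generic LMS recursion in which the usual scalar step-size mu is replaced by a step-size matrix M, so that mu * I is recovered as a special case. All names and the toy channel-identification setup are assumptions for illustration.

```python
def lms_matrix_step(x_seq, d_seq, M):
    """Run LMS with a matrix step-size: w <- w + M @ x * e.

    A scalar step-size mu is the special case M = mu * I.
    x_seq : sequence of regression vectors (lists of length n)
    d_seq : corresponding desired (reference) samples
    M     : n-by-n step-size matrix (list of lists)
    """
    n = len(M)
    w = [0.0] * n                                  # coefficient vector
    for x, d in zip(x_seq, d_seq):
        y = sum(wi * xi for wi, xi in zip(w, x))   # filter output
        e = d - y                                  # a priori error
        # Multiply the regression vector by the step-size matrix;
        # off-diagonal entries of M would rotate/decorrelate the update.
        Mx = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
        w = [w[i] + Mx[i] * e for i in range(n)]   # coefficient update
    return w
```

For example, with a diagonal M (i.e., a scalar step-size per tap) the recursion identifies a hypothetical 2-tap channel h = [0.5, -0.3] from noiseless input/output pairs; a time-variant scheme would replace the constant M with a matrix recomputed at each iteration.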
