Abstract

In adaptive filtering practice, the Least Mean Squares (LMS) algorithm is widely used because of its computational simplicity and ease of implementation. However, since its convergence rate depends on the eigenvalue spread (the ratio of largest to smallest eigenvalue) of the autocorrelation matrix of the input signal, an LMS adaptive filter converges rather slowly when trained with a colored-noise input. With the continuing increase in computational power available in modern integrated signal processors (simply called "DSP chips" throughout the following discussion), adaptive filter designers should be free in the future to use more computationally intensive adaptive filtering algorithms that can outperform the simple LMS algorithm in real-time applications. The objective of this chapter is to explore several of these more computationally intensive, but potentially better-performing, adaptive filtering algorithms. In particular, we consider four classes of algorithms that have received attention from the research community in recent years: 1) data-reusing LMS algorithms, 2) orthogonalization by pseudo-random (PR) modulation, 3) Gauss-Newton optimization for FIR filters, and 4) block adaptive IIR filters using preconditioned conjugate gradient techniques.
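To make the baseline concrete, the following is a minimal sketch of the standard LMS update, w(n+1) = w(n) + mu * e(n) * x(n), applied to a system-identification task. The unknown system coefficients, step size, and signal lengths here are illustrative assumptions, not taken from the chapter; with a white-noise input the filter converges quickly, whereas a strongly colored input would slow convergence as described above.

```python
import random


def lms_identify(x, d, num_taps, mu):
    """Adapt an FIR filter to the desired signal d using the LMS rule.

    At each step: y(n) = w^T x(n), e(n) = d(n) - y(n),
    w <- w + mu * e(n) * x(n), where x(n) is the tap-input vector.
    """
    w = [0.0] * num_taps
    for n in range(num_taps - 1, len(x)):
        # Tap-input vector: the num_taps most recent samples, newest first.
        xv = x[n - num_taps + 1 : n + 1][::-1]
        y = sum(wi * xi for wi, xi in zip(w, xv))   # filter output
        e = d[n] - y                                # a priori error
        w = [wi + mu * e * xi for wi, xi in zip(w, xv)]
    return w


# Hypothetical demo: identify a 2-tap system h = [0.5, -0.3]
# driven by unit-variance white Gaussian noise.
random.seed(0)
h = [0.5, -0.3]
x = [random.gauss(0.0, 1.0) for _ in range(5000)]
d = [h[0] * x[n] + (h[1] * x[n - 1] if n > 0 else 0.0)
     for n in range(len(x))]
w = lms_identify(x, d, num_taps=2, mu=0.01)
```

Because the input here is white, the autocorrelation matrix is a scaled identity (eigenvalue spread of 1) and `w` converges close to `h`; replacing `x` with the output of a narrowband coloring filter would increase the eigenvalue spread and visibly slow this same loop down.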
