Abstract
The paper presents new convergence results for two adaptive filters: the RLS and LMS algorithms. Convergence of the exact RLS algorithm is studied when the forgetting factor $\lambda$ is constant, which enables the adaptive filter to track time variations of the optimal filter. It is shown that, in the steady state, the squared deviation of the adaptive filter from the optimal one admits, with probability $1-\epsilon$ ($\epsilon$ arbitrarily small), an upper bound proportional to the (infinitesimal) quantity $\mu = 1 - \lambda$. This result agrees with the algorithm's practical behavior. The bound increases with the degree of correlation of the filter inputs. The paper also provides an almost sure convergence result for the LMS algorithm with decreasing step-size (infinite memory), which can be used only when the optimal filter is asymptotically time-invariant, although the input statistics may be time-varying.
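To make the two algorithms concrete, here is a minimal sketch of their standard update recursions, not taken from the paper itself: an exponentially weighted RLS step with constant forgetting factor `lam` (so that $\mu = 1 - \lambda$), and an LMS step whose step-size `mu_n` may decrease over time. The variable names and the $1/n$ step-size schedule in the usage note are illustrative assumptions.

```python
import numpy as np

def rls_step(w, P, x, d, lam):
    """One exponentially weighted RLS update (standard textbook form).

    w   -- current filter weights
    P   -- current inverse input-correlation estimate
    x   -- input (regressor) vector
    d   -- desired response
    lam -- constant forgetting factor, 0 < lam <= 1
    """
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a priori error
    w = w + k * e                    # weight update
    P = (P - np.outer(k, Px)) / lam  # inverse-correlation update
    return w, P

def lms_step(w, x, d, mu_n):
    """One LMS update with a (possibly decreasing) step-size mu_n."""
    e = d - w @ x
    return w + mu_n * e * x
```

For the decreasing step-size (infinite memory) variant discussed in the abstract, a typical choice is a schedule such as `mu_n = c / n` at iteration `n`; the specific schedule analyzed in the paper is not stated in the abstract.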