Abstract

The authors examine a family of adaptive filter algorithms of the form $W_{k+1} = W_k + \mu f(d_k - W_k^t X_k) X_k$, in which $f(\cdot)$ is a memoryless odd-symmetric nonlinearity acting upon the error. Such algorithms are a generalization of the least-mean-square (LMS) adaptive filtering algorithm to even-symmetric error criteria. For this algorithm family, the authors derive general expressions for the mean and mean-square convergence of the filter coefficients for both arbitrary stochastic input data and Gaussian input data. They then provide methods for optimizing the nonlinearity to minimize the algorithm misadjustment for a given convergence rate. Using the calculus of variations, it is shown that the optimum nonlinearity to minimize misadjustment near convergence under slow adaptation conditions is independent of the statistics of the input data and can be expressed as $-p'(x)/p(x)$, where $p(x)$ is the probability density function of the uncorrelated plant noise. For faster adaptation under the white Gaussian input and noise assumptions, the nonlinearity is shown to be $x/(1 + \mu \lambda x^2 / \sigma_k^2)$, where $\lambda$ is the input signal power and $\sigma_k^2$ is the conditional error power. Thus, the optimum stochastic gradient error criterion for Gaussian noise is not mean-square. It is shown that the equations governing the convergence of the nonlinear algorithm are exactly those which describe the behavior of the optimum scalar data nonlinear adaptive algorithm for white Gaussian input. Simulations verify the results for a host of noise interferences and indicate the improvement obtained with non-mean-square error criteria.
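As a concrete illustration (not taken from the paper), the following Python sketch implements the update $W_{k+1} = W_k + \mu f(e_k) X_k$ for a few choices of the error nonlinearity $f$ mentioned in the abstract: the identity (ordinary LMS), the near-convergence optimum $-p'(x)/p(x)$ evaluated for Laplacian plant noise (which reduces to a sign nonlinearity), and the faster-adaptation Gaussian form $x/(1+\mu\lambda x^2/\sigma_k^2)$. The plant, filter length, step size, and noise model are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the nonlinear stochastic-gradient family
#   W_{k+1} = W_k + mu * f(d_k - W_k^t X_k) * X_k
# in a system-identification setting. All parameters below are assumed
# for illustration; they do not come from the paper.

rng = np.random.default_rng(0)
N = 8                              # filter length (assumed)
mu = 0.01                          # step size (assumed)
W_true = rng.standard_normal(N)    # unknown plant to identify (assumed setup)

def f_lms(e):
    # f(x) = x recovers ordinary LMS, i.e. the mean-square error criterion.
    return e

def f_score_laplacian(e, b=1.0):
    # Near convergence, the misadjustment-optimal nonlinearity is the score
    # function -p'(x)/p(x) of the plant-noise density p. For Laplacian noise
    # p(x) ~ exp(-|x|/b) this is sign(x)/b, i.e. sign-LMS up to a scale.
    return np.sign(e) / b

def f_fast_gaussian(e, lam, sigma2):
    # Faster-adaptation optimum under white Gaussian input and noise:
    # f(x) = x / (1 + mu*lambda*x^2/sigma_k^2), with lambda the input power
    # and sigma_k^2 the conditional error power (held fixed here for
    # simplicity; the paper treats it as evolving with the algorithm).
    return e / (1.0 + mu * lam * e**2 / sigma2)

W = np.zeros(N)
for k in range(5000):
    X = rng.standard_normal(N)             # white Gaussian input vector
    noise = rng.laplace(scale=1.0)         # uncorrelated Laplacian plant noise
    d = W_true @ X + noise                 # desired response
    e = d - W @ X                          # a priori error
    W = W + mu * f_score_laplacian(e) * X  # nonlinear LMS update

print("coefficient error norm:", np.linalg.norm(W - W_true))
```

Swapping `f_score_laplacian` for `f_lms` in the update line gives the mean-square baseline; for heavy-tailed noise such as the Laplacian used here, the matched score nonlinearity is the kind of non-mean-square criterion whose improvement the paper's simulations report.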
