Abstract
Traditional filtering theory is typically based on optimizing the expected value of a suitably chosen function of the error, such as the minimum mean-square error (MMSE) criterion, the minimum error entropy (MEE) criterion, and so on. None of these criteria captures all of the probabilistic information about the error distribution. In this work, we propose a novel approach that shapes the probability density function (PDF) of the errors in adaptive filtering. Because the PDF contains all of the probabilistic information, the proposed approach can be used to achieve a desired variance or entropy, and is expected to be useful in complex signal processing and learning systems. In our method, the information divergence between the actual and desired error distributions is chosen as the cost function, which is estimated by a kernel approach. Some important properties of the estimated divergence are presented. In addition, for the finite impulse response (FIR) filter, a stochastic gradient algorithm is derived. Finally, simulation examples illustrate the effectiveness of this algorithm in adaptive system training.
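The core cost described above, an information divergence between the actual and desired error distributions estimated by a kernel approach, can be sketched as follows. This is a minimal illustration under assumed choices, not the paper's exact derivation: a Gaussian (Parzen) kernel, a fixed bandwidth `h`, and a sample-mean estimate of the Kullback-Leibler divergence are all assumptions.

```python
import numpy as np

def gaussian_kernel(u, h):
    # Gaussian (Parzen) kernel with bandwidth h (an assumed kernel choice)
    return np.exp(-u**2 / (2.0 * h**2)) / (np.sqrt(2.0 * np.pi) * h)

def kde(x, samples, h):
    # Parzen-window density estimate at points x, built from `samples`
    return np.mean(gaussian_kernel(x[:, None] - samples[None, :], h), axis=1)

def kl_divergence_estimate(e_actual, e_desired, h=0.1):
    # Sample-mean estimate of KL(p_actual || p_desired):
    # average of log(p_actual(e) / p_desired(e)) over the actual error samples,
    # with both densities replaced by their kernel estimates.
    p_a = kde(e_actual, e_actual, h)
    p_d = kde(e_actual, e_desired, h)
    eps = 1e-12  # guard against log(0) in sparse regions
    return np.mean(np.log((p_a + eps) / (p_d + eps)))
```

As a sanity check, the estimate is near zero when the actual errors already follow the desired distribution and grows as the two distributions separate; driving this quantity toward zero by adjusting the filter weights is the shaping idea the abstract describes.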