Abstract

When dealing with complex nonlinear signals in intelligent systems, the kernel function, defined as the inner product of two data vectors in the feature space, captures the nonlinear mapping between the reproducing kernel Hilbert space (RKHS) and the original data space. By carrying out classical linear adaptive filtering in the RKHS, the filtering operation can be expressed entirely through inner products evaluated with the kernel function, which is referred to as the “kernel trick.” As long as an algorithm can be expressed in terms of inner products, not only can the convex least-squares problem be solved iteratively, but nonlinear adaptive filtering algorithms are also obtained that combine universal approximation capability with convexity. In this way, the kernel method and adaptive filtering algorithms are unified. On the other hand, because the Gram matrix is used, the dimension of a kernel adaptive algorithm is determined by the number of data samples: as the number of observed sample points grows, the size of the state space increases exponentially with the dimension. Kernel adaptive algorithms must therefore address online sparsification to avoid the “curse of dimensionality.” As Part II of this survey, building on online sparse kernel learning and classical adaptive filtering, this paper investigates kernel adaptive algorithms and online sparsification algorithms. The main work of this paper has two aspects. First, combining classical adaptive algorithms with kernel feature mapping, the paper examines the basic concepts of kernel adaptive filtering and the realization mechanisms of four kernel adaptive algorithms in depth: 1) kernel least mean squares; 2) kernel recursive least squares; 3) kernel affine projection algorithm; and 4) kernel principal component analysis. Second, to reduce computational complexity, the paper studies online sparsification methods, including the novelty criterion, approximate linear dependency, sliding window, coherence criterion, and surprise criterion. Finally, the paper summarizes the essential conclusions for the above algorithms and offers perspectives on future research directions.
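To make the kernel trick and online sparsification concrete, the following sketch implements kernel least mean squares (KLMS) with novelty-criterion sparsification. It is a minimal illustration under stated assumptions, not the paper's reference implementation: the Gaussian kernel, the step size eta, the kernel width sigma, and the thresholds delta_dist and delta_err are illustrative choices, and the class and parameter names are hypothetical.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel: evaluates the RKHS inner product
    # of the feature maps of x and y without forming them explicitly.
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

class KLMS:
    """Kernel least mean squares with novelty-criterion sparsification.

    The filter is a kernel expansion f(x) = sum_i alpha_i * k(c_i, x)
    over a dictionary of stored centers; eta, sigma, delta_dist, and
    delta_err are hypothetical tuning parameters for this sketch.
    """
    def __init__(self, eta=0.5, sigma=1.0, delta_dist=0.1, delta_err=0.05):
        self.eta, self.sigma = eta, sigma
        self.delta_dist, self.delta_err = delta_dist, delta_err
        self.centers, self.alphas = [], []  # dictionary and expansion coefficients

    def predict(self, x):
        # Filter output via the kernel trick: only inner products are needed.
        return sum(a * gaussian_kernel(c, x, self.sigma)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, d):
        e = d - self.predict(x)  # instantaneous prediction error
        if not self.centers:
            self.centers.append(x)
            self.alphas.append(self.eta * e)
            return e
        # Novelty criterion: grow the dictionary only when the input is far
        # from every stored center AND the error is still large; otherwise
        # the sample is discarded, which bounds the network size online.
        dist = min(np.linalg.norm(x - c) for c in self.centers)
        if dist > self.delta_dist and abs(e) > self.delta_err:
            self.centers.append(x)
            self.alphas.append(self.eta * e)
        return e

# Example usage: learn a static nonlinearity from noisy samples.
rng = np.random.default_rng(0)
f = KLMS()
for _ in range(500):
    x = rng.uniform(-1.0, 1.0, size=1)
    d = np.sin(3 * x[0]) + 0.05 * rng.standard_normal()
    f.update(x, d)
```

With thresholds tuned to the data scale, the dictionary typically grows far more slowly than the number of processed samples, which is precisely the goal of online sparsification; replacing the novelty test with approximate linear dependency, a sliding window, the coherence criterion, or the surprise criterion changes only the admission rule, not the kernel expansion itself.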
