Abstract

Random Fourier mapping (RFM) in kernel adaptive filters (KAFs) provides an efficient way to curb the linear growth of the dictionary by projecting the original input data into a finite-dimensional space. The measure commonly used in RFM-based KAFs is the minimum mean square error (MMSE), which causes performance deterioration in the presence of non-Gaussian noise. To address this issue, the minimum Cauchy loss (MCL) criterion has been successfully applied to combat non-Gaussian noise in KAFs. However, KAFs that rely on the well-known stochastic gradient descent (SGD) optimization method may suffer from a slow convergence rate and low filtering accuracy. To this end, this paper proposes a novel robust random Fourier features Cauchy conjugate gradient (RFFCCG) algorithm based on the conjugate gradient (CG) optimization method. The proposed RFFCCG algorithm has low complexity and achieves better filtering performance than KAFs with sparsification, such as the kernel recursive maximum correntropy algorithm with a novelty criterion (KRMC-NC), in both stationary and non-stationary environments. Monte Carlo simulations on time-series prediction and nonlinear system identification confirm the superiority of the proposed algorithm.
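For context on the mapping described above, the following is a minimal sketch of a Rahimi–Recht style random Fourier feature map that approximates a Gaussian kernel. It is not the paper's implementation; the names make_rff_map and sigma, and the choice m = 500 in the usage check, are illustrative.

```python
import numpy as np

def make_rff_map(n, m, sigma=1.0, rng=None):
    """Draw a random Fourier feature map z: R^n -> R^m approximating the
    Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
    A minimal Rahimi-Recht style sketch; names and defaults are illustrative."""
    rng = np.random.default_rng(rng)
    # Spectral samples of the Gaussian kernel and random phases.
    W = rng.normal(scale=1.0 / sigma, size=(m, n))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)

    def z(x):
        # Finite-dimensional feature vector; z(x)^T z(y) approximates k(x, y).
        return np.sqrt(2.0 / m) * np.cos(W @ x + b)

    return z

# Quick check that the feature inner product tracks the Gaussian kernel.
z = make_rff_map(n=2, m=500, sigma=1.0, rng=0)
x, y = np.array([0.3, -0.1]), np.array([0.1, 0.2])
approx = z(x) @ z(y)
exact = np.exp(-np.sum((x - y) ** 2) / 2.0)
print(approx, exact)  # the two values should be close for large m
```

Because the feature vector has a fixed dimension m, the filter weights live in R^m and the dictionary no longer grows with the number of training samples, which is the property the abstract refers to.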

Highlights

  • Many applications in the real world, such as system identification, regression, and online kernel learning (OKL) [1], require complex nonlinear models

  • We compare the computational complexities of random Fourier features Cauchy conjugate gradient (RFFCCG) with the kernel least mean square (KLMS) algorithm [3], kernel recursive least squares (KRLS) algorithm [5], kernel conjugate gradient (KCG) algorithm [30], and kernel recursive maximum correntropy (KRMC) algorithm [31] in Table 1, where k is the number of iterations and n and m are the dimensions of original data space and random Fourier features space (RFFS), respectively

  • The simulation settings (restated as a configuration sketch after this list) were as follows: the bandwidth of the Gaussian kernel was set to 1 for all algorithms; the step size was η = 0.1 for random Fourier features kernel least mean square (RFFKLMS) and random Fourier features maximum correntropy (RFFMC); the threshold ε = 0.05 was chosen for QKRLS; for KRMC-NC, the distance threshold and the error threshold were set to δ1 = 0.1 and δ2 = 0.1, respectively, and the regularization parameter to λ = 0.1; for random Fourier features conjugate gradient (RFFCG) and RFFCCG, the forgetting factor was set to β = 0.999; γ = 0.3 was chosen for RFFCCG; the dimension of RFFS
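To make these settings easier to scan, here is a hedged restatement as a plain configuration dictionary; the key names are illustrative, only the values stated in the highlight come from the source, and the RFFS dimension is left unset because the highlight is truncated at that point.

```python
# Hedged restatement of the simulation settings listed above; key names are
# illustrative, and only the stated values are taken from the source.
sim_config = {
    "kernel_bandwidth": 1.0,          # Gaussian kernel bandwidth for all algorithms
    "step_size_eta": 0.1,             # RFFKLMS and RFFMC
    "qkrls_threshold_eps": 0.05,      # QKRLS threshold
    "krmc_nc": {
        "distance_threshold_delta1": 0.1,
        "error_threshold_delta2": 0.1,
        "regularization_lambda": 0.1,
    },
    "forgetting_factor_beta": 0.999,  # RFFCG and RFFCCG
    "cauchy_gamma": 0.3,              # RFFCCG
    "rffs_dimension": None,           # truncated in the extracted highlight
}
```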


Summary

Introduction

Many applications in the real world, such as system identification, regression, and online kernel learning (OKL) [1], require complex nonlinear models. KAFs based on second-order statistical measures are sensitive to non-Gaussian noise, including sub-Gaussian and super-Gaussian noise, which means that their performance may be seriously degraded when the training data are contaminated by outliers. To handle this issue, robust statistical measures have gained increasing attention, among which the lower-order error measure [17] and the higher-lower error measure [18] are two typical examples. To reduce the computational complexity, we apply the RFM to the MCL-based KAF to address the problem of linear dictionary growth while preserving robustness. The CG optimization method is then used to improve the filtering accuracy and convergence rate, yielding the novel robust random Fourier features Cauchy conjugate gradient (RFFCCG) algorithm. By applying the CG method, RFFCCG provides good filtering accuracy against non-Gaussian noise with low computational and space complexities.
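To illustrate how a Cauchy loss yields robustness to outliers when filtering in the random Fourier feature space, the following is a minimal sketch of a stochastic-gradient variant, not the authors' CG-based RFFCCG; the loss form l(e) = (γ²/2) ln(1 + e²/γ²), the default parameter values, and the names rff_cauchy_sgd and cauchy_loss_grad are assumptions. The feature map z can be, for example, the one sketched after the abstract.

```python
import numpy as np

def cauchy_loss_grad(e, gamma):
    """Gradient of the Cauchy loss l(e) = (gamma^2 / 2) * ln(1 + e^2 / gamma^2)
    with respect to e; the exact loss form used in the paper may differ."""
    return e / (1.0 + (e / gamma) ** 2)

def rff_cauchy_sgd(X, d, z, m, eta=0.1, gamma=0.3):
    """Adaptive filtering in an m-dimensional random Fourier feature space under
    a Cauchy loss. A simplified SGD sketch, not the CG-based RFFCCG algorithm.
    X: sequence of input vectors, d: desired outputs, z: feature map R^n -> R^m."""
    w = np.zeros(m)                                      # filter weights in the RFFS
    errors = []
    for x, d_i in zip(X, d):
        phi = z(x)                                       # map input into the RFFS
        e = d_i - w @ phi                                # prediction error
        w = w + eta * cauchy_loss_grad(e, gamma) * phi   # robust SGD update
        errors.append(e)
    return w, np.array(errors)
```

The influence function e / (1 + (e/γ)²) saturates for large errors, so impulsive outliers produce only small weight updates; the paper's RFFCCG instead performs the update with an online conjugate-gradient recursion, which is where the forgetting factor β listed in the highlights enters.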

Minimum Cauchy Loss Criterion
Conjugate Gradient Algorithm
Online Conjugate Gradient Algorithm
Proposed Algorithm
Random Fourier Mapping
RFFCCG Algorithm
Complexity
Simulation
Mackey–Glass Time Series
Nonlinear System Identification
Conclusions