Abstract

This article addresses the robust estimation of the output-layer linear parameters in a radial basis function network (RBFN). A prominent method for estimating the output-layer parameters in an RBFN with predetermined hidden-layer parameters is least-squares estimation, which is the maximum-likelihood (ML) solution in the specific case of Gaussian noise. We highlight the connection between ML estimation and minimizing the Kullback-Leibler (KL) divergence between the actual noise distribution and the assumed Gaussian noise model. Based on this connection, a method is proposed using a variant of a generalized KL divergence, which is known to be more robust to outliers in pattern recognition and machine-learning problems. The proposed approach produces a surrogate likelihood function, which is robust in the sense that it is adaptive to a broader class of noise distributions. Several signal processing experiments are conducted using artificially generated and real-world data. It is shown that in all cases, the proposed adaptive learning algorithm outperforms the standard approaches in terms of mean-squared error (MSE). Using the relative increase in the MSE under different noise conditions, we compare the robustness of the proposed algorithm with existing methods for robust RBFN training and show that our method yields an overall improvement in terms of absolute MSE values and consistency.
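The stated link between ML estimation and KL minimization is the standard cross-entropy decomposition; the following sketch uses generic notation ($p$, $q_w$, $e_i$, $\phi_j$ are illustrative symbols, not taken from the paper):

$$
\mathbb{E}_{e \sim p}\bigl[-\log q_w(e)\bigr] \;=\; D_{\mathrm{KL}}\bigl(p \,\|\, q_w\bigr) \;+\; H(p),
$$

where $p$ is the actual noise distribution, $q_w$ is the assumed Gaussian noise model induced by the output weights $w$, and the entropy $H(p)$ is constant in $w$. Maximizing the likelihood in $w$ therefore minimizes the KL divergence to the assumed noise model. When $q_w$ is Gaussian with fixed variance, $-\log q_w(e_i)$ is, up to constants, proportional to $e_i^2 = \bigl(y_i - \sum_j w_j\,\phi_j(x_i)\bigr)^2$, which recovers the least-squares criterion.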
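For concreteness, here is a minimal sketch of the baseline the abstract describes: least-squares (equivalently, Gaussian-ML) estimation of the output weights with predetermined Gaussian hidden units. The data, basis settings, and variable names are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Synthetic regression data (assumed example, not the paper's data)
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = np.sinc(x) + 0.1 * rng.standard_normal(200)   # noisy target

# Predetermined hidden layer: fixed Gaussian centers and width
centers = np.linspace(-3, 3, 15)
width = 0.5
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

# Least-squares estimate of the output weights
# (the ML solution under the Gaussian-noise assumption)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
print("training MSE:", np.mean((y - y_hat) ** 2))
```

The robust method proposed in the article would replace the squared-error criterion implied by the Gaussian assumption with one derived from a generalized KL divergence; that algorithm is not reproduced here.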
