Abstract

In most optimization problems, the loss function assigns the minimum cost to the single best sample, i.e., the one with the smallest error magnitude. However, that sample may have been deemed best only because of noise, an anomaly known as the "optimizer's curse". Our goal is to manage this anomaly in single-task diffusion networks by giving nearly equal importance to a set of good samples rather than to the single best one. To this end, we introduce a novel loss function and propose a "robust diffusion smooth σ-insensitive correntropy" algorithm for distributed estimation. The proposed loss function is convex and smooth; it is related to the ϵ-insensitive loss family and also inherits beneficial properties of the correntropy loss, allowing it to handle very small and very large errors at once. Consequently, it is highly resistant to Gaussian, uniform, and impulsive non-Gaussian noise. We also provide theoretical analyses of the convergence and stability of the proposed method, its steady-state and transient behavior, and its computational complexity, together with guidance on choosing the parameter values. Simulation results show that the presented algorithm outperforms several recent diffusion-based methods from the literature in both stationary and non-stationary environments.
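For context, the sketch below illustrates the generic adapt-then-combine (ATC) diffusion recursion into which a robust loss of this kind is typically plugged. It is not the paper's algorithm: the σ-insensitive correntropy loss is defined in the body of the paper, and the `robust_influence` function here is a placeholder correntropy-style influence function; the step size `mu`, kernel width `sigma`, and combination matrix `A` are illustrative assumptions.

```python
# Hedged sketch of an adapt-then-combine (ATC) diffusion update for
# distributed estimation over a network of N nodes. The loss below is
# NOT the proposed sigma-insensitive correntropy loss; it is a stand-in
# correntropy-style (Welsch) loss, used only to show where a robust
# loss enters the diffusion recursion.
import numpy as np

def robust_influence(e, sigma=1.0):
    # Influence function (derivative w.r.t. e) of the placeholder loss
    # l(e) = sigma^2 * (1 - exp(-e^2 / (2 sigma^2))).
    # Large errors are down-weighted, which gives impulsive-noise robustness.
    return e * np.exp(-e**2 / (2.0 * sigma**2))

def atc_diffusion_step(W, U, d, A, mu=0.01, sigma=1.0):
    """One ATC diffusion step.

    W : (N, M) current estimates, one row per node
    U : (N, M) regressors observed at each node
    d : (N,)   desired responses at each node
    A : (N, N) left-stochastic combination matrix (columns sum to 1)
    """
    N = W.shape[0]
    psi = np.empty_like(W)
    # Adapt: each node takes a local gradient step driven by the
    # robust influence function instead of the raw error.
    for k in range(N):
        e_k = d[k] - U[k] @ W[k]
        psi[k] = W[k] + mu * robust_influence(e_k, sigma) * U[k]
    # Combine: each node averages its neighbors' intermediate estimates,
    # w_k = sum_l a_{l,k} * psi_l, i.e. A.T @ psi in matrix form.
    return A.T @ psi
```

In this template, replacing `robust_influence` with the derivative of the paper's smooth σ-insensitive correntropy loss would yield the proposed update; the ATC adapt/combine structure itself is standard in the diffusion-adaptation literature.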
