Abstract

Distributed learning has attracted considerable attention in recent years owing to its ability to handle large-scale data in a variety of science and engineering problems. Based on a divide-and-conquer strategy, this paper studies a distributed robust regression algorithm associated with correntropy losses and coefficient regularization in the framework of kernel networks, where the kernel functions are not required to be symmetric or positive semi-definite. We establish explicit convergence results for this distributed algorithm in terms of the number of data partitions and the robustness and regularization parameters. We show that, with suitable parameter choices, the distributed robust algorithm attains the minimax-optimal convergence rate while reducing the computational complexity and memory requirements of the standard (non-distributed) algorithm.
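
To make the scheme concrete, the following is a minimal Python/NumPy sketch of a divide-and-conquer, correntropy-based kernel network with coefficient regularization. It assumes the Welsch form of the correntropy loss, ℓ_σ(t) = σ²(1 − exp(−t²/σ²)), a half-quadratic (iteratively reweighted least-squares) solver, and simple averaging of the local predictors; all function names, the regularization scaling, and the example kernel are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def correntropy_weights(residuals, sigma):
    # IRLS weights induced by the Welsch/correntropy loss
    # ell_sigma(t) = sigma^2 * (1 - exp(-t^2 / sigma^2))
    return np.exp(-residuals**2 / sigma**2)

def local_fit(X, y, kernel, lam, sigma, n_iter=20):
    # Coefficient-regularized kernel network on one data block:
    # f(x) = sum_i alpha_i K(x, x_i), with penalty lam * n * ||alpha||^2.
    # The kernel matrix need not be symmetric or PSD, so we solve the
    # regularized weighted least-squares normal equations directly.
    K = kernel(X, X)                      # n x n, possibly asymmetric
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):               # half-quadratic / IRLS loop
        r = y - K @ alpha
        W = np.diag(correntropy_weights(r, sigma))
        # minimize sum_i w_i (y_i - (K alpha)_i)^2 + lam * n * ||alpha||^2
        alpha = np.linalg.solve(K.T @ W @ K + lam * n * np.eye(n),
                                K.T @ W @ y)
    return X, alpha

def distributed_fit(X, y, kernel, lam, sigma, m):
    # Divide-and-conquer: split the sample into m blocks and fit each locally.
    blocks = np.array_split(np.arange(len(y)), m)
    return [local_fit(X[idx], y[idx], kernel, lam, sigma) for idx in blocks]

def distributed_predict(models, kernel, X_new):
    # Synthesize the global estimator by averaging the local predictors.
    preds = [kernel(X_new, Xb) @ ab for Xb, ab in models]
    return np.mean(preds, axis=0)

# Example with a deliberately asymmetric (shifted) Gaussian-type kernel.
kernel = lambda A, B: np.exp(
    -np.sum((A[:, None, :] - B[None, :, :] + 0.1)**2, axis=2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(400)
models = distributed_fit(X, y, kernel, lam=1e-3, sigma=1.0, m=4)
y_hat = distributed_predict(models, kernel, rng.uniform(-1, 1, size=(50, 1)))
```

Note that because the penalty acts on the expansion coefficients rather than on an RKHS norm, the kernel matrix K may be asymmetric or indefinite; the normal equations above nevertheless remain well posed, since Kᵀ W K + λ n I is symmetric positive definite whenever λ > 0 and the weights are positive.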
