Abstract

This article focuses on online kernel learning over a decentralized network. Each agent in the network receives online streaming data and collaboratively learns a globally optimal nonlinear prediction function in the reproducing kernel Hilbert space (RKHS). To overcome the curse of dimensionality in traditional online kernel learning, we utilize random feature (RF) mapping to convert the nonparametric kernel learning problem into a fixed-length parametric one in the RF space. We then propose a novel learning framework, named online decentralized kernel learning via linearized ADMM (ODKLA), to efficiently solve the online decentralized kernel learning problem. To enhance communication efficiency, we introduce quantization and censoring strategies in the communication stage, resulting in the quantized and communication-censored ODKLA (QC-ODKLA) algorithm. We theoretically prove that both ODKLA and QC-ODKLA achieve the optimal sublinear regret O(√T) over T time slots. Through numerical experiments, we evaluate the learning effectiveness, communication efficiency, and computational efficiency of the proposed methods.
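To illustrate the RF mapping step described above, the following is a minimal sketch of random Fourier features for a Gaussian kernel, which approximates k(x, y) by an inner product of fixed-length feature vectors. This is an assumed, generic construction for illustration (the function name, the Gaussian kernel choice, and the parameter gamma are not from the article), not the authors' exact implementation.

```python
import numpy as np

def random_fourier_features(X, D, gamma=1.0, rng=None):
    """Map inputs X of shape (n, d) into a D-dimensional random feature space.

    Approximates the Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2):
    the expected inner product of the returned features equals k(x, y).
    Hypothetical helper for illustration; not from the article.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Draw frequencies from the kernel's spectral density
    # (a Gaussian with standard deviation sqrt(2 * gamma) per dimension).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    # Random phase shifts, uniform on [0, 2*pi).
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    # Cosine features; the scaling makes the inner product unbiased.
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

With such a map, each agent can learn a D-dimensional weight vector by standard parametric online updates instead of storing a growing dictionary of past samples, which is what removes the curse of dimensionality in the online kernel setting.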
