Abstract

AUC (Area Under the ROC Curve) is a widely used metric for measuring classification performance. Recently, owing to the efficiency and scalability of online learning on large-scale problems, maximizing AUC in the online setting has attracted considerable interest. Traditional online learning algorithms require a single computation node to process all the data sequentially. However, with the explosion in the volume and velocity of modern data generated from distributed sources such as sensor networks and the Internet, it is inefficient to send all the data to one node for sequential processing and model updating. To address this challenge, we explore distributed learning techniques for maximizing AUC in the online setting with a linear model. Specifically, we propose two algorithms for the distributed online AUC maximization problem: (i) the Centralized Distributed One-Pass Online AUC Maximization (C-DOPOAM) algorithm, in which a server updates the global model with gradients computed by the workers; and (ii) the Decentralized Distributed One-Pass Online AUC Maximization (D-DOPOAM) algorithm, in which each worker updates its local model and exchanges information with its neighbors to reach consensus. Moreover, we establish regret bounds for both algorithms; they achieve convergence rates similar to those of state-of-the-art non-distributed online AUC maximization algorithms. Finally, we conduct extensive experiments on different computation platforms and network configurations to validate our theoretical results. We also compare C-DOPOAM and D-DOPOAM under poor network conditions; the results show that D-DOPOAM is more robust than C-DOPOAM to low bandwidth and high latency.
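The centralized scheme described above can be illustrated with a minimal sketch: each worker computes the gradient of a pairwise surrogate for the AUC objective on its local positive/negative pair, and the server averages the gradients into the global model. The hinge surrogate, the function names, and the toy data below are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pairwise_hinge_grad(w, x_pos, x_neg):
    # Gradient of the pairwise hinge surrogate max(0, 1 - w.(x+ - x-)),
    # a common stand-in for the (non-differentiable) AUC objective.
    diff = x_pos - x_neg
    margin = 1.0 - w @ diff
    return -diff if margin > 0 else np.zeros_like(w)

def centralized_round(w, worker_batches, lr=0.1):
    # One server round in a C-DOPOAM-style setup (sketch): every worker
    # sends its local gradient; the server averages them and updates w.
    grads = [pairwise_hinge_grad(w, xp, xn) for (xp, xn) in worker_batches]
    return w - lr * np.mean(grads, axis=0)

# Toy usage: two workers, 2-D features; each holds one (positive, negative) pair.
w = np.zeros(2)
batches = [(np.array([2.0, 1.0]), np.array([0.0, 0.0])),
           (np.array([1.5, 1.0]), np.array([0.5, 0.0]))]
for _ in range(50):
    w = centralized_round(w, batches)
# After training, w should rank positives above negatives.
```

In the decentralized (D-DOPOAM-style) variant, the averaging step would instead happen between neighboring workers over the network topology rather than at a single server.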
