Abstract

In this paper, we study gradient boosting with distributed data streams over multi-agent networks and propose a distributed online gradient boosting algorithm. To respect limited communication resources and privacy, each node aims to track the minimizer of a global, time-varying cost function using only its own data stream and information shared by its neighbors. We first formulate the global cost function as a sum of local cost functions, thereby recasting distributed online gradient boosting as a distributed online optimization problem. At each time step, each node updates its local model with a gradient descent step on the current data, followed by a consensus step with its neighbors. We then measure the performance of the proposed algorithm with a dynamic regret and prove that the regret admits an O(T) bound. Simulations on real-world datasets illustrate the performance of the proposed algorithm.
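To make the per-step update concrete, below is a minimal sketch of the two-stage step the abstract describes: a local gradient step on the newly revealed sample, then a consensus (weighted-averaging) step with the neighbors. This is an illustration under stated assumptions, not the paper's algorithm: it assumes each node's boosted model is parameterized by a weight vector over a fixed dictionary of weak learners, a squared loss, and a doubly stochastic mixing matrix `W`; the names `distributed_online_step` and `local_grad` and the step-size schedule are hypothetical.

```python
import numpy as np

def local_grad(x, a, b):
    """Gradient of the local squared loss 0.5 * (x^T a - b)^2 at model x.
    (Assumed loss; the paper's local cost may differ.)"""
    return (x @ a - b) * a

def distributed_online_step(X, W, data, step_size):
    """One round of the sketched update.

    X    : (n, d) array, row i is node i's current model parameters
    W    : (n, n) doubly stochastic mixing matrix (W[i, j] > 0 only
           if j is a neighbor of i)
    data : list of n (a_i, b_i) pairs, the samples revealed this round
    """
    # 1) local gradient descent on each node's newly revealed sample
    G = np.stack([local_grad(X[i], a, b) for i, (a, b) in enumerate(data)])
    X_half = X - step_size * G
    # 2) consensus: each node takes a weighted average of its
    #    neighbors' intermediate models
    return W @ X_half

# Example: 4 nodes on a ring with a symmetric, doubly stochastic W
n, d = 4, 3
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=0)
                              + np.roll(np.eye(n), -1, axis=0))
X = np.zeros((n, d))
rng = np.random.default_rng(0)
for t in range(100):
    data = [(rng.normal(size=d), 1.0) for _ in range(n)]
    X = distributed_online_step(X, W, data, step_size=0.1 / np.sqrt(t + 1))
```

Averaging with a doubly stochastic W preserves the network-wide average model across the consensus step, which is the standard mechanism that lets each node approach the global minimizer while communicating only with its neighbors.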
