Abstract

With the explosion of data sizes and the limited storage capacity of any single location, data are often distributed across multiple locations. We thus face the challenge of performing large-scale machine learning on these distributed data over communication networks. In this paper, we generalize distributed dual coordinate ascent from a star network to a general tree-structured network, and provide a convergence rate analysis of the generalized algorithm. Numerical experiments demonstrate that distributed dual coordinate ascent on a tree network can outperform its star-network counterpart when the network has long communication delays between the center node and its direct child nodes.
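The tree generalization described above can be illustrated with a minimal, hypothetical sketch (this is an assumption-laden toy, not the paper's algorithm, analysis, or data): each leaf runs local dual coordinate ascent (SDCA-style closed-form steps for a ridge-regression objective) on its data partition, inner nodes sum their children's updates on the way to the root, and the root averages and applies the aggregate. A star network is then the special case in which every partition is a direct child of the root.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical toy ridge-regression problem (illustration only, not the paper's data)
n, d, lam = 40, 5, 0.1
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def local_sdca(idx, alpha, w, epochs=5):
    """Local dual coordinate ascent on one partition against a fixed w.
    Returns proposed changes to the dual variables and to w."""
    da, dw = np.zeros(n), np.zeros(d)
    for _ in range(epochs):
        for i in idx:
            xi = X[i]
            # closed-form SDCA coordinate step for the squared loss
            step = (y[i] - xi @ (w + dw) - (alpha[i] + da[i])) / (1 + xi @ xi / (lam * n))
            da[i] += step
            dw += step * xi / (lam * n)
    return da, dw

def solve_on_tree(tree, rounds=50):
    """Leaves hold data partitions; inner nodes sum their children's updates
    on the way up, and the root averages and broadcasts the new iterate."""
    alpha, w = np.zeros(n), np.zeros(d)

    def subtree_update(node):
        if isinstance(node, np.ndarray):            # leaf: local example indices
            return local_sdca(node, alpha, w) + (1,)
        da, dw, k = np.zeros(n), np.zeros(d), 0
        for child in node:                          # inner node: aggregate children
            cda, cdw, ck = subtree_update(child)
            da += cda
            dw += cdw
            k += ck
        return da, dw, k

    for _ in range(rounds):
        da, dw, k = subtree_update(tree)
        alpha += da / k                             # conservative averaging of updates
        w = w + dw / k
    return w

# star network: all partitions are direct children of the root;
# tree network: the same partitions grouped under two intermediate nodes
parts = np.array_split(np.arange(n), 4)
w_star = solve_on_tree(list(parts))
w_tree = solve_on_tree([[parts[0], parts[1]], [parts[2], parts[3]]])
```

Because both topologies aggregate the same per-partition updates, the star and tree runs compute the same iterates; the point of the tree layout is that aggregation hops can be placed to avoid slow links to the center node.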
