Abstract

In this paper, we study distributed learning with multi-penalty regularization based on a divide-and-conquer approach. Using a Neumann expansion and a second-order decomposition of the difference of operator inverses, we derive optimal learning rates in expectation for distributed multi-penalty regularization. As a byproduct, we also deduce optimal learning rates for (non-distributed) multi-penalty regularization, which were not previously available in the literature. These results are then applied to distributed manifold regularization, for which optimal learning rates are likewise established.
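To illustrate the divide-and-conquer scheme described above, here is a minimal sketch in a simplified linear (rather than kernel) setting: the data are partitioned into local subsets, a multi-penalty regularized least-squares problem is solved in closed form on each subset, and the local estimators are averaged. The penalty operator `B`, the penalty weights `lam1`/`lam2`, and the linear model are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

def multi_penalty_solve(X, y, lam1, lam2, B):
    """Closed-form minimizer of (1/n)||Xw - y||^2 + lam1 ||w||^2 + lam2 ||Bw||^2.

    Two penalty terms make this a (hypothetical) multi-penalty scheme;
    the normal equations give (X^T X / n + lam1 I + lam2 B^T B) w = X^T y / n.
    """
    n, d = X.shape
    A = X.T @ X / n + lam1 * np.eye(d) + lam2 * (B.T @ B)
    return np.linalg.solve(A, X.T @ y / n)

def distributed_estimator(X, y, m, lam1, lam2, B):
    """Divide-and-conquer: split the sample into m parts, solve each
    local multi-penalty problem, and average the local estimators."""
    parts = np.array_split(np.arange(len(y)), m)
    local = [multi_penalty_solve(X[idx], y[idx], lam1, lam2, B) for idx in parts]
    return np.mean(local, axis=0)
```

With `m = 1` the distributed estimator reduces to the global one; the theoretical question the paper addresses is how large `m` may grow while the averaged estimator retains the optimal learning rate.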
