Abstract

This paper proposes a synchronous parallel block coordinate descent algorithm for minimizing a composite function consisting of a smooth convex function plus a non-smooth but separable convex function. Owing to the generality of the proposed method, several existing synchronous parallel algorithms can be viewed as special cases. To tackle high-dimensional problems, the authors further develop a randomized variant that updates a randomly chosen subset of coordinate blocks at each round of computation. Both parallel algorithms are proven to achieve a sublinear convergence rate under rather mild assumptions. Numerical experiments on large-scale regularized logistic regression with an ℓ1-norm penalty show that the implementation is efficient. The authors conclude with an explanation of the observed experimental results and a discussion of potential improvements.
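To make the problem setting concrete, below is a minimal Python sketch of one randomized proximal block coordinate descent step applied to ℓ1-regularized logistic regression, the test problem used in the experiments. This is an illustrative reconstruction, not the authors' implementation: the block partition, step size, and single-block (rather than parallel multi-block) update per round are simplifying assumptions, and all function and variable names are hypothetical.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def randomized_block_cd(X, y, lam, n_blocks=10, n_rounds=100, seed=0):
        # Sketch of randomized proximal block coordinate descent for
        #   min_w (1/n) sum_i log(1 + exp(-y_i * x_i^T w)) + lam * ||w||_1
        # with y_i in {-1, +1}. Each round updates one randomly chosen
        # block via a proximal gradient step; a parallel variant would
        # update several blocks simultaneously.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        blocks = np.array_split(np.arange(d), n_blocks)
        # Conservative Lipschitz constant of the logistic-loss gradient.
        L = 0.25 * np.linalg.norm(X, ord=2) ** 2 / n
        for _ in range(n_rounds):
            # Full gradient of the smooth part; a practical implementation
            # would compute only the chosen block's partial gradient.
            margins = y * (X @ w)
            grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
            b = blocks[rng.integers(n_blocks)]  # pick one block uniformly
            w[b] = soft_threshold(w[b] - grad[b] / L, lam / L)
        return w

A parallel version would update several such blocks at once, with the step size adjusted for the degree of parallelism; controlling that interaction is what the paper's sublinear convergence analysis addresses.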
