Abstract

This paper deals with constrained convex problems in which the objective function is smooth and strongly convex and the feasible set is described by a large number of closed convex (possibly nonpolyhedral) sets. To handle the complicated constraints efficiently, we consider a dual formulation of this problem. We prove that the corresponding dual function satisfies a quadratic growth property on any sublevel set, provided that the primal objective function is smooth and strongly convex and the feasible set satisfies Slater's condition. We propose (accelerated) random coordinate descent algorithms that exploit the special composite structure of the dual problem. With the existing theory, however, one can prove only that such methods converge sublinearly. Based on the new quadratic growth property derived for the dual problem, we show that these methods converge faster; that is, the dual (accelerated) random coordinate descent algorithms converge linearly. Besides providing a general dual framework for the analysis of random dual coordinate descent schemes, our results resolve an open problem in the literature related to the convergence of the Dykstra algorithm on the best feasibility problem for a collection of convex sets: we establish a linear convergence rate for the random Dykstra algorithm when the convex sets merely satisfy Slater's condition, and we also derive a new accelerated variant of the Dykstra algorithm.
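To make the randomized Dykstra scheme mentioned in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation: at each iteration a set is sampled uniformly at random, its stored correction term is added back, the point is projected onto that set, and the correction is updated. The particular sets (a unit Euclidean ball and a halfspace, which satisfy Slater's condition), the iteration count, and all function names are assumptions made for illustration.

```python
import numpy as np

def proj_ball(y):
    """Euclidean projection onto the unit ball {x : ||x|| <= 1}."""
    n = np.linalg.norm(y)
    return y if n <= 1.0 else y / n

def proj_halfspace(y, a, b):
    """Euclidean projection onto the halfspace {x : a^T x <= b}."""
    viol = a @ y - b
    return y if viol <= 0 else y - viol * a / (a @ a)

def random_dykstra(x0, a, b, iters=5000, seed=0):
    """Randomized Dykstra sketch for the best approximation of x0
    onto the intersection of the unit ball and {x : a^T x <= b}."""
    rng = np.random.default_rng(seed)
    projs = [proj_ball, lambda y: proj_halfspace(y, a, b)]
    q = [np.zeros_like(x0) for _ in projs]  # one correction term per set
    x = x0.copy()
    for _ in range(iters):
        i = rng.integers(len(projs))  # sample a set uniformly at random
        y = x + q[i]                  # add back the stored correction
        x = projs[i](y)               # project onto the sampled set
        q[i] = y - x                  # update the correction term
    return x
```

The correction terms `q[i]` are what distinguish Dykstra's method from plain alternating projections: they ensure the iterates converge to the projection of `x0` onto the intersection, not merely to some feasible point, and under Slater's condition the paper's analysis yields a linear rate for this randomized variant.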
