Abstract

Composite convex optimization models arise in several applications and are especially prevalent in inverse problems with a sparsity-inducing norm and in general convex optimization with simple constraints. The most widely used algorithms for composite convex models are accelerated first-order methods; however, they can require many iterations to compute an acceptable solution for large-scale problems. In this paper we propose speeding up first-order methods by exploiting the structure present in many applications, and in image processing in particular. Our method builds on multilevel optimization and exploits the fact that many applications giving rise to large-scale models can be modeled at varying degrees of fidelity. We combine Nesterov's acceleration techniques with the multilevel approach to achieve an $\mathcal{O}(1/\sqrt{\epsilon})$ convergence rate, where $\epsilon$ denotes the desired accuracy. The proposed method has a better convergence rate than any existing multilevel method for convex problems and, in addition, matches the rate of accelerated methods, which is known to be optimal for first-order methods. Moreover, as our numerical experiments show, on large-scale face recognition problems our algorithm is several times faster than the state of the art.
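To make the setting concrete, the accelerated first-order baseline the abstract refers to can be sketched as FISTA applied to a composite model with a sparsity-inducing $\ell_1$ norm (the lasso). This is a minimal illustration of the single-level accelerated method, not the paper's multilevel algorithm; the function names and step-size choice are ours.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1, the sparsity-inducing term.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, b, lam, iters=500):
    # Accelerated proximal gradient (FISTA) for the composite model
    #     min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
    # which attains the O(1/sqrt(eps)) iteration complexity mentioned above.
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()                       # extrapolated (momentum) point
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)       # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2        # Nesterov momentum update
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

The multilevel idea in the paper accelerates this further by replacing some fine-level iterations with cheaper updates on lower-fidelity models; the sketch above only shows the fine-level accelerated scheme.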
