Abstract

Linear convergence rates of descent methods for unconstrained minimization are usually proved under the assumption that the objective function is strongly convex. Recently it was shown that the weaker assumption of restricted strong convexity suffices for linear convergence of the ordinary gradient descent method. A decisive difference from strong convexity is that the set of minimizers of a restricted strongly convex function need be neither a singleton nor bounded. In this paper we extend the linear convergence results under this weaker assumption to a larger class of descent methods, including restarted nonlinear CG, BFGS, and its damped limited memory variants L-D-BFGS. For twice continuously differentiable objective functions we even obtain r-step superlinear convergence for the CG_DESCENT conjugate gradient method of Hager and Zhang, where r is greater than or equal to the rank of the Hessian at a minimizer. This is remarkable since the Hessian of a restricted strongly convex function need not have full rank. Furthermore, we show that convex quadratic splines and objective functions of the unconstrained duals to some linearly constrained optimization problems are restricted strongly convex. In particular, this holds for the regularized basis pursuit problem and its analogues for nuclear norm minimization and principal component pursuit.
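The phenomenon described above can be illustrated numerically. The following sketch (not taken from the paper; the matrix sizes, seed, and step size are illustrative assumptions) runs plain gradient descent on a least-squares objective f(x) = ½‖Ax − b‖² with a deliberately rank-deficient A. The Hessian AᵀA is singular, so f is not strongly convex and its minimizers form an affine subspace rather than a single point, yet the function values still decay geometrically, consistent with linear convergence under restricted strong convexity.

```python
import numpy as np

# Illustrative setup: A has at most rank 3 but acts on 8 variables,
# so the Hessian A^T A is singular and minimizers are non-unique.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))
b = A @ rng.standard_normal(8)        # consistent system: optimal value is 0

L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
x = np.zeros(8)
vals = []
for _ in range(120):
    grad = A.T @ (A @ x - b)
    x -= grad / L                     # fixed step length 1/L
    vals.append(0.5 * np.linalg.norm(A @ x - b) ** 2)

# Geometric decay of function values: successive ratios stay below 1,
# even though A^T A has rank at most 3 < 8.
ratios = [vals[k + 1] / vals[k] for k in range(30)]
print(max(ratios))
```

The observed contraction factor corresponds to the smallest nonzero eigenvalue of AᵀA, i.e., to the restricted strong convexity constant rather than to a (here nonexistent) global strong convexity constant.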
