Abstract

Most first-order methods rely on the global Lipschitz continuity of the objective's gradient, an assumption that fails to hold in many problems. This paper develops a sequential local optimization (SLO) framework that enables first-order algorithms to optimize problems without a globally Lipschitz gradient. Operating under the assumption that the gradient is locally Lipschitz continuous over any compact set, SLO employs a careful scheme to control the distance between successive iterates. The proposed framework adapts easily to existing first-order methods, such as projected gradient descent (PGD), truncated gradient descent (TGD), and a parameter-free variant of the Armijo linesearch. We show that SLO requires $\mathcal{O}(\hat{L}_1(Y)\Delta\epsilon^{-2})$ gradient evaluations to find an ϵ-stationary point, where $\Delta$ denotes the initial optimality gap, $Y$ is a certain compact set with $\mathcal{O}(\Delta/\epsilon)$ radius, and $\hat{L}_i(Y)$ denotes the Lipschitz constant of the $i$th-order derivatives in $Y$. It is worth noting that our analysis provides the first nonasymptotic convergence rate for a (slight variant of the) Armijo linesearch algorithm without a globally Lipschitz continuous gradient or convexity. As a generic framework, SLO can also incorporate more complicated subroutines, such as a variant of the accelerated gradient descent (AGD) method that harnesses the problem's second-order smoothness without Hessian computation and achieves an improved $\tilde{\mathcal{O}}(\hat{L}_1(Y)^{1/2}\hat{L}_2(Y)^{1/4}\Delta\epsilon^{-7/4})$ complexity. Funding: J. Zhang is supported by the MOE AcRF [Grant A-0009530-04-00] from the Singapore Ministry of Education. M. Hong is supported by NSF [Grants CIF-1910385 and EPCN-2311007]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/ijoo.2021.0029.
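
To make the distance-control idea concrete, below is a minimal, hypothetical Python sketch, not the authors' algorithm and not taken from the paper: a gradient-descent loop with Armijo backtracking whose per-iteration movement is clipped to a ball of a chosen radius around the current iterate, so that only the local Lipschitz behavior of the gradient on that compact ball is ever relevant. The function names (`slo_gradient_descent`, `armijo_step`), the `radius` parameter, the backtracking constants, and the specific clipping rule are all illustrative assumptions.

```python
import numpy as np

def armijo_step(f, x, g, t0=1.0, beta=0.5, sigma=1e-4, max_backtracks=50):
    """Backtracking (Armijo) linesearch along -g; no global Lipschitz constant is used."""
    t = t0
    fx = f(x)
    gg = float(np.dot(g, g))
    for _ in range(max_backtracks):
        x_new = x - t * g
        if f(x_new) <= fx - sigma * t * gg:   # sufficient-decrease test
            return x_new
        t *= beta
    return x  # no acceptable step found; stay put

def slo_gradient_descent(f, grad, x0, radius=0.5, eps=1e-3, max_iter=10_000):
    """Gradient descent whose movement is clipped to a ball of the given radius
    around the current iterate (an illustration of the 'distance control' idea)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:          # approximate eps-stationarity test
            break
        step = armijo_step(f, x, g) - x
        dist = np.linalg.norm(step)
        if dist > radius:                      # cap the distance between successive iterates
            step *= radius / dist
        x = x + step
    return x

# Toy usage: f(x) = sum(x^4) has a gradient that is Lipschitz on every bounded set
# but not globally Lipschitz.
f = lambda x: float(np.sum(x ** 4))
grad = lambda x: 4.0 * x ** 3
print(slo_gradient_descent(f, grad, x0=np.array([2.0, -1.5])))
```

The toy example uses f(x) = Σ x⁴, whose gradient 4x³ is Lipschitz on every compact set but not globally Lipschitz, which is exactly the setting the abstract targets.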
