Abstract

We present a new algorithm for nonconvex bound-constrained quadratic optimization. In the strictly convex case, our method is equivalent to the state-of-the-art algorithm by Dostál and Schöberl [Comput. Optim. Appl., 30 (2005), pp. 23--43]. Unlike their method, however, ours is supported by a convergence theory that holds even when the problems are nonconvex. This is achieved by carefully addressing the challenges associated with directions of negative curvature, in particular, those that may naturally arise when applying the conjugate gradient algorithm to an indefinite system of equations. Our presentation and analysis deal explicitly with both lower and upper bounds on the optimization variables, whereas the work by Dostál and Schöberl considers only strictly convex problems with lower bounds. To handle this generality, we introduce the reduced chopped gradient, which is analogous to the previously used reduced free gradient. The reduced chopped gradient leads to a new condition that determines when optimization over a given subspace should be terminated. This condition, although not equivalent, is motivated by a similar condition used by Dostál and Schöberl. Numerical results illustrate the superior performance of our method over commonly used solvers that employ gradient projection steps and subspace acceleration.
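To make the free/chopped gradient terminology concrete, the following is a minimal sketch of the standard gradient decomposition for bound-constrained quadratic programs, assuming the usual definitions from the literature (cf. Dostál). The function name `projected_gradient_split` and its tolerance parameter are ours for illustration; the paper's *reduced* variants, which additionally cap components against the distance to the bounds, are not reproduced here.

```python
import numpy as np

def projected_gradient_split(A, b, x, lo, hi, tol=1e-12):
    """Split the gradient of q(x) = 0.5 x'Ax - b'x at a feasible x
    into the free gradient (components on the free set) and the
    chopped gradient (descent-permitting components on active bounds)."""
    g = A @ x - b                            # gradient of the quadratic
    at_lo = x - lo <= tol                    # active at the lower bound
    at_hi = hi - x <= tol                    # active at the upper bound
    free = ~(at_lo | at_hi)                  # strictly inside the box

    phi = np.where(free, g, 0.0)             # free gradient
    beta = np.zeros_like(g)                  # chopped gradient:
    beta[at_lo] = np.minimum(g[at_lo], 0.0)  # descent still feasible if g_i < 0
    beta[at_hi] = np.maximum(g[at_hi], 0.0)  # descent still feasible if g_i > 0
    return phi, beta                         # x is a KKT point iff phi + beta == 0
```

Under these definitions, phi + beta vanishing is exactly the first-order optimality condition for the box-constrained problem, which is why the relative sizes of the two pieces are a natural trigger for switching between subspace minimization and projection steps.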
