Abstract

Over the last two decades, it has been observed that using the gradient vector as a search direction in large-scale optimization may lead to efficient algorithms. The effectiveness relies on choosing the step lengths according to novel ideas that are related to the spectrum of the underlying local Hessian rather than related to the standard decrease in the objective function. A review of these so-called spectral projected gradient methods for convex constrained optimization is presented. To illustrate the performance of these low-cost schemes, an optimization problem on the set of positive definite matrices is described.

Highlights

  • In 1988, a pioneering paper by Barzilai and Borwein [9] proposed a gradient method for the unconstrained minimization of a differentiable function f : R^n → R that uses a novel and nonstandard strategy for choosing the step length

  • Optimization problems on the space of matrices, restricted to the convex set of positive definite matrices, arise in various applications such as statistics, financial mathematics, model updating, and matrix least-squares settings in general; see, e.g., [30, 48, 50, 66, 67, 95]

  • To illustrate the use of the Spectral Projected Gradient (SPG) method, we describe a classification scheme that can be written as an optimization problem on the convex set of positive definite matrices
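A building block for running SPG on the set of positive definite matrices is the Euclidean projection onto that set, which (in the Frobenius norm) amounts to flooring the eigenvalues of the symmetrized matrix. The sketch below is not taken from the paper; it is a generic illustration of that projection, with the function name `project_psd` and the floor parameter `eps` chosen here for exposition:

```python
import numpy as np

def project_psd(A, eps=1e-8):
    """Project a square matrix onto the (nearly) positive definite cone
    in the Frobenius norm by clipping eigenvalues at eps (hypothetical
    helper for illustration, not code from the paper)."""
    S = 0.5 * (A + A.T)        # symmetrize first
    w, V = np.linalg.eigh(S)   # eigenvalues in ascending order
    w = np.maximum(w, eps)     # floor the spectrum at eps
    return (V * w) @ V.T       # reassemble V diag(w) V^T
```

With eps = 0 this is the classical projection onto the positive semidefinite cone; a small positive eps keeps iterates strictly positive definite, which is what constraint sets of this kind typically require.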


Summary

Introduction

In 1988, a pioneering paper by Barzilai and Borwein [9] proposed a gradient method for the unconstrained minimization of a differentiable function f : R^n → R that uses a novel and nonstandard strategy for choosing the step length. Writing the secant equation as H_{k+1} y_k = s_k, which is standard in the quasi-Newton tradition, we arrive at a different spectral coefficient: (y_k^T y_k)/(s_k^T y_k); see [9, 83]. Both this dual and the primal (6) spectral choices of step length produce fast and effective nonmonotone gradient methods for large-scale unconstrained optimization [52, 54, 84]. A key feature is to accept the initial BB-type step length as frequently as possible while simultaneously guaranteeing global convergence. For this reason, the SPG method employs a nonmonotone line search that does not impose functional decrease at every iteration.
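The ingredients described above can be combined into a compact sketch: a spectral (BB) step length, a projection onto the feasible convex set, and a nonmonotone line search that only requires decrease relative to the maximum of the last M function values. This is a minimal illustration written for this summary, not the authors' implementation; the parameter names (M, gamma, the safeguards alpha_min/alpha_max) follow common usage in the SPG literature:

```python
import numpy as np

def spg(f, grad, proj, x0, max_iter=200, M=10, gamma=1e-4,
        alpha_min=1e-10, alpha_max=1e10, tol=1e-8):
    """Minimal SPG sketch: spectral (BB) step + nonmonotone line search."""
    x = proj(np.asarray(x0, dtype=float))
    g = grad(x)
    alpha = 1.0 / max(np.linalg.norm(g, np.inf), 1.0)  # initial step heuristic
    f_hist = [f(x)]
    for _ in range(max_iter):
        d = proj(x - alpha * g) - x          # projected gradient direction
        if np.linalg.norm(d, np.inf) < tol:  # approximate stationarity
            break
        f_ref = max(f_hist[-M:])             # nonmonotone reference value
        lam, gtd = 1.0, g @ d
        while f(x + lam * d) > f_ref + gamma * lam * gtd:
            lam *= 0.5                       # simple backtracking
        s = lam * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        # primal BB (spectral) step (s^T s)/(s^T y), safeguarded
        alpha = np.clip((s @ s) / sy, alpha_min, alpha_max) if sy > 0 else alpha_max
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x
```

Note that the line search compares against the maximum of the last M function values rather than the current one, which is exactly what allows the BB-type step to be accepted at most iterations without sacrificing global convergence.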

A matrix problem on the set of positive definite matrices
Applications and extensions
Conclusions