Abstract

We consider an iterative computation of negative curvature directions, in large-scale unconstrained optimization frameworks, needed for ensuring convergence toward stationary points which satisfy second-order necessary optimality conditions. We show that, to this purpose, the conjugate gradient (CG) method can be fruitfully coupled with a recently introduced approach involving the use of the numeral grossone. In particular, recalling that in principle the CG method is well posed only when solving positive definite linear systems, our proposal exploits grossone to enhance the performance of the CG method, allowing the computation of negative curvature directions in the indefinite case, too. Our overall method can be used to significantly generalize the theory in the state-of-the-art literature. Moreover, it straightforwardly allows the solution of Newton’s equation in optimization frameworks, even for nonconvex problems. We remark that our iterative procedure to compute a negative curvature direction does not require the storage of any matrix, needing only a couple of vectors. This represents a definite advance with respect to current results in the literature.
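As a quick reference for the terminology used above, the standard second-order necessary conditions and the usual definition of a negative curvature direction can be recalled as follows (a recap of textbook definitions, not text quoted from the paper):

```latex
% Second-order necessary optimality conditions at a stationary point x*:
\nabla f(x^{\ast}) = 0, \qquad \nabla^{2} f(x^{\ast}) \succeq 0 .
% A vector d is a negative curvature direction for f at x whenever
d^{\top} \nabla^{2} f(x)\, d < 0 ,
% i.e. the quadratic model of f strictly decreases along d (or -d) near x.
```

When the Hessian is indefinite such a direction always exists, and exploiting it is what allows convergence to points satisfying both conditions rather than mere first-order stationarity.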

Highlights

  • We consider the solution of the nonconvex unconstrained optimization problem min_{x∈R^n} f(x), where f : R^n → R is a nonlinear smooth function and n is large. Despite the use of the term ‘minimization’ in this problem, most of the methods proposed in the literature generate a sequence of points {x_k} which is only guaranteed to converge to stationary points.

  • This is obtained at the cost of a slight modification of the matrix L into a suitably adjusted counterpart: we briefly prove that this arrangement can allow the computation of a bounded negative curvature direction d_j at x_j.

  • Our proposal exploits the simplicity of an algebra associated with the numeral grossone, which was recently introduced in the literature to handle infinite and infinitesimal quantities
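To convey the flavour of that algebra, here is a minimal, purely illustrative Python sketch that manipulates quantities of the form a·① + b as coefficient pairs; the class name and the restriction to one infinite and one finite term are assumptions made for the example, not the authors' implementation or the full ①-based arithmetic.

```python
# Toy model of arithmetic with the numeral grossone ("①"):
# a quantity a*① + b is stored as the coefficient pair (a, b).
# Purely illustrative; the full ①-based methodology handles far more
# general expressions (e.g. negative powers of ① for infinitesimals).

class GrossNumber:
    def __init__(self, inf=0.0, fin=0.0):
        self.inf = inf   # coefficient of ① (infinite part)
        self.fin = fin   # ordinary finite part

    def __add__(self, other):
        if not isinstance(other, GrossNumber):
            other = GrossNumber(0.0, float(other))
        return GrossNumber(self.inf + other.inf, self.fin + other.fin)

    def scale(self, c):
        # multiplication by an ordinary (finite) real number c
        return GrossNumber(self.inf * c, self.fin * c)

    def finite_part(self):
        # discard the infinite term and keep the finite coefficient
        return self.fin

    def __repr__(self):
        return f"{self.inf}*① + {self.fin}"

# Example: a quantity that would be negative in ordinary arithmetic can be
# shifted by a positive multiple of ①, kept formally "large" throughout a
# computation, and its finite information recovered at the end.
q = GrossNumber(inf=1.0, fin=-2.5) + 1.0
print(q, "->", q.finite_part())
```

The point of the sketch is only that sums and scalings act independently on the two coefficients, which is the kind of simplicity the highlight refers to.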


Summary

Introduction

We consider the solution of the nonconvex unconstrained optimization problem min_{x∈R^n} f(x), where f : R^n → R is a nonlinear smooth function and n is large. Observe that additional care is definitely mandatory when using such methods, since imposing standard first-order stationarity conditions may not, in general, ensure convexity of the quadratic model of the objective function in a neighborhood of the solution points. In this regard, the computation of so-called negative curvature directions for the objective function is an essential tool (see the recent papers [4,8]) to guarantee convergence to stationary points which satisfy second-order necessary conditions. In [3,5] the direction d_j is obtained as a by-product of the Krylov-subspace method applied for solving Newton’s equation, though an expensive storage is required in [5] and a heavy computational burden is necessary in the approach proposed in [3]. With ‖·‖ we indicate the Euclidean norm. λ[A] denotes a generic eigenvalue of the matrix A ∈ R^{n×n}, and A ≻ 0 [A ⪰ 0] indicates that A is positive definite [semidefinite]. e_k ∈ R^n represents the kth unit vector, while the symbol ① represents the numeral grossone (see [13]).
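For readers unfamiliar with how a Krylov-subspace method exposes curvature information, the following sketch shows the classical safeguard used in truncated Newton methods: plain CG applied to the Newton equation H d = -g is stopped as soon as a conjugate direction p with pᵀHp ≤ 0 is met, and that p is returned as a (non-positive) curvature direction. This is only the textbook scheme that the ①-based proposal improves upon; all function and variable names are illustrative.

```python
import numpy as np

def cg_with_curvature_check(H, g, tol=1e-8, max_iter=None):
    """Plain CG on H d = -g that also monitors curvature.

    Returns (d, p_neg): the current CG iterate d and, if encountered,
    a direction p_neg with p_neg^T H p_neg <= 0 (otherwise None).
    Illustrative textbook scheme, not the grossone-based method.
    """
    n = g.size
    max_iter = max_iter or n
    d = np.zeros(n)
    r = -g.copy()            # residual of H d = -g at d = 0
    p = r.copy()
    for _ in range(max_iter):
        curv = p @ (H @ p)
        if curv <= 0.0:      # non-positive curvature: CG no longer well posed
            return d, p      # p is a (negative/zero) curvature direction
        alpha = (r @ r) / curv
        d = d + alpha * p
        r_new = r - alpha * (H @ p)
        if np.linalg.norm(r_new) <= tol:
            return d, None
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d, None

# Example on a small indefinite Hessian
H = np.diag([4.0, 1.0, -3.0])
g = np.array([1.0, -2.0, 0.5])
d, p_neg = cg_with_curvature_check(H, g)
print("negative curvature direction found:", p_neg)
```

Schemes of this kind either require storing the generated Krylov basis or re-running the recursion, which is exactly the storage/computational burden the paper's ①-based procedure is designed to avoid.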

Negative Curvature Directions in Truncated Newton Methods
A Brief Introduction to the ①-Based Computational Methodology
The Matrix Factorizations We Need
Our Proposal
Numerical Experience
Findings
Conclusions