Abstract

An accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for solving unconstrained optimization problems is presented. The basic idea is to combine the scaled memoryless BFGS method and the preconditioning technique in the frame of the conjugate gradient method. The preconditioner, which is also a scaled memoryless BFGS matrix, is reset when the Beale–Powell restart criterion holds. The parameter scaling the gradient is selected as a spectral gradient. For the steplength computation, the method exploits the fact that in conjugate gradient algorithms the step lengths may differ from 1 by two orders of magnitude and tend to vary unpredictably; we therefore suggest an acceleration scheme able to improve the efficiency of the algorithm. Under common assumptions, the method is proved to be globally convergent. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in the function values is significantly improved. Under mild conditions the algorithm is globally convergent for strongly convex functions. Computational results for a set of 750 unconstrained optimization test problems show that this new accelerated scaled conjugate gradient algorithm substantially outperforms known conjugate gradient methods: SCALCG [3–6], CONMIN by Shanno and Phua (1976, 1978) [42,43], Hestenes and Stiefel (1952) [25], Polak–Ribière–Polyak (1969) [32,33], Dai and Yuan (2001) [17], Dai and Liao (2001) (t = 1) [14], conjugate gradient with sufficient descent condition [7], hybrid Dai and Yuan (2001) [17], hybrid Dai and Yuan zero (2001) [17], and CG_DESCENT by Hager and Zhang (2005, 2006) [22,23], as well as the quasi-Newton L-BFGS method [26] and the truncated Newton method of Nash (1985) [27].
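For context, the spectral gradient scaling and the Beale–Powell restart test mentioned in the abstract take, in their standard forms, roughly the following shape. This is only a sketch under the usual definitions (with s_k the step, y_k the gradient difference, and g_k the gradient at x_k); the precise variant used in the paper may differ.

\[
s_k = x_{k+1} - x_k, \qquad y_k = g_{k+1} - g_k, \qquad
\theta_{k+1} = \frac{s_k^{\top} s_k}{y_k^{\top} s_k}
\quad \text{(spectral scaling of the gradient)},
\]
\[
\left| g_{k+1}^{\top} g_k \right| \ge 0.2\, \lVert g_{k+1} \rVert^{2}
\quad \text{(Beale–Powell restart test: reset the preconditioner)}.
\]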
