Abstract

Based on a modified secant equation proposed by Li and Fukushima, we derive a stepsize for the Barzilai–Borwein gradient method. Then, using the newly proposed stepsize together with another effective stepsize proposed by Dai et al. in an adaptive scheme based on the convexity of the objective function, we suggest a modified two-point stepsize gradient algorithm. We also show that the limit point of the sequence generated by our algorithm is first-order critical. Finally, we present numerical comparisons on a set of unconstrained optimization test problems from the CUTEr collection. First, we compare the performance of our algorithm with two other two-point stepsize gradient algorithms proposed by Dai et al. and by Raydan. Then, we compare implementations of our algorithm with two conjugate gradient methods, proposed by Gilbert and Nocedal and by Hestenes and Stiefel, as well as with the limited-memory BFGS (L-BFGS) algorithm proposed by Liu and Nocedal. Furthermore, to provide numerical support for our adaptive approach, we compare our algorithm, which applies both stepsizes together, with two other two-point stepsize gradient algorithms, one applying the stepsize proposed by Dai et al. and the other applying our newly proposed stepsize. Numerical results demonstrating the efficiency of our algorithm, in the sense of the performance profile introduced by Dolan and Moré, are reported.
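For context, the sketch below illustrates the general structure of a two-point stepsize (Barzilai–Borwein) gradient iteration with an adaptive stepsize switch. Since the abstract does not specify the stepsize derived from Li and Fukushima's modified secant equation or the stepsize of Dai et al., the classical BB stepsize is used as a stand-in, and the sign of s^T y serves as a crude local-convexity test; all function and parameter names are illustrative, not the paper's actual algorithm.

```python
import numpy as np

def bb_gradient(grad, x0, max_iter=500, tol=1e-6):
    """Sketch of a two-point stepsize (Barzilai-Borwein) gradient method.

    The classical BB stepsize s^T s / s^T y stands in for the paper's
    stepsizes; the sign of s^T y acts as a rough local-convexity
    indicator driving the adaptive switch.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1.0  # initial stepsize
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        x_new = x - alpha * g        # gradient step with current stepsize
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g  # two-point differences
        sty = s @ y
        if sty > 1e-12:
            # positive curvature along s: classical BB stepsize
            alpha = (s @ s) / sty
        else:
            # curvature test fails (nonconvex region): safeguarded fallback
            alpha = min(1.0, 1.0 / max(np.linalg.norm(g_new), 1e-12))
        x, g = x_new, g_new
    return x

# Usage on an ill-conditioned quadratic f(x) = 0.5 x^T A x.
A = np.diag([1.0, 10.0, 100.0])
x_star = bb_gradient(lambda x: A @ x, np.ones(3))
print(x_star)  # approaches the minimizer at the origin
```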
