Abstract

We propose an improved variant of accelerated gradient optimization models for solving unconstrained minimization problems. By merging the positive features of both the double direction and the double step size accelerated gradient models, we define an iterative method of simpler form that is generally more effective. The convergence analysis shows that the defined iterative method is at least linearly convergent for uniformly convex and strictly convex functions. Numerical test results confirm the efficiency of the developed model with respect to CPU time, number of iterations, and number of function evaluations.

Highlights

  • Taking into account the iterative form of the accelerated double direction (ADD) method, as well as the good performance of the accelerated double step-size (ADSS) scheme across all three tested metrics, we propose the following iterative model for solving large-scale unconstrained minimization problems (see the sketch after this list): $x_{k+1} = x_k - (\alpha_k \gamma_k^{-1} + \alpha_k^2) g_k \equiv x_k - \alpha_k \gamma_k^{-1} g_k - \alpha_k^2 g_k$

  • From the results presented in [4], we know that the second search direction $d_k$, defined in the ADD iteration by (11), causes an increase in the number of function evaluations.
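
A minimal sketch of a single step of this iteration, assuming the gradient $g_k$, the acceleration parameter $\gamma_k > 0$, and the Backtracking step size $\alpha_k$ are already available (the function and argument names are illustrative, not taken from the paper):

```python
import numpy as np

def mod_ads_step(x_k, g_k, gamma_k, alpha_k):
    """One update of the proposed iteration:
    x_{k+1} = x_k - (alpha_k / gamma_k + alpha_k**2) * g_k.
    g_k is the gradient at x_k, gamma_k > 0 the acceleration parameter,
    and alpha_k the step size returned by Backtracking.
    """
    return x_k - (alpha_k / gamma_k + alpha_k**2) * g_k
```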


Summary

Modified Accelerated Double Direction and Double Step Size Method

Since the modADS belongs to the class of accelerated double direction and double step size methods and presents a merged form of the ADD and ADSS iterations, keeping $\alpha_k^2$ as the second step length was a natural choice. In view of the TADSS iteration (14), TADSS can be said to correspond to a different choice of the second step size $\beta_k$ in the ADSS iteration. This motivates defining the modADS in the presented way and comparing the performance of these two similar approaches. Instead of using an additional inexact line search to compute the second step length, the modADS uses only one Backtracking procedure and defines the second step length parameter as the square of the Backtracking outcome $\alpha_k$. This reduces the computational time, the number of iterations, and the number of function evaluations.
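
A runnable sketch of the full scheme under these choices is given below. The Armijo parameters in the Backtracking procedure and the update rule for the acceleration parameter $\gamma_k$ are assumptions modeled on related accelerated gradient methods, not quotations from the paper:

```python
import numpy as np

def backtracking(f, x, g, d, sigma=1e-4, beta=0.8):
    """Standard Armijo Backtracking; sigma and beta are illustrative
    defaults, not values prescribed by the paper."""
    alpha = 1.0
    while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
        alpha *= beta
    return alpha

def mod_ads(f, grad, x0, tol=1e-6, max_iter=10_000):
    """Sketch of the modADS loop: one Backtracking call per iteration,
    with the second step length taken as alpha**2."""
    x = np.asarray(x0, dtype=float)
    gamma = 1.0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g / gamma                        # single accelerated direction
        alpha = backtracking(f, x, g, d)      # the only line search per step
        x_new = x + alpha * d - alpha**2 * g  # second step length is alpha**2
        # Taylor-type estimate of the next acceleration parameter; the
        # exact formula used in the paper may differ (assumed form).
        s = x_new - x
        gamma_new = 2.0 * (f(x_new) - f(x) - g @ s) / (s @ s)
        gamma = gamma_new if gamma_new > 0 else 1.0
        x = x_new
    return x
```

For instance, calling `mod_ads(lambda x: x @ x, lambda x: 2 * x, np.ones(100))` on a convex quadratic drives the iterates to the origin; this is only a sanity check of the sketch, not a reproduction of the paper's experiments.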

Set of Uniformly Convex Functions
Set of Strictly Convex Quadratics
Numerical Outcomes and Comparative Analysis
Discussion
Conclusions