Abstract

Two transformations of gradient-descent iterative methods for solving unconstrained optimization problems are proposed. The first transformation, called modification, is defined by a small enlargement of the step size in various gradient-descent methods. The second transformation, termed hybridization, is defined as a composition of gradient-descent methods with the Picard–Mann hybrid iterative process. As a result, several accelerated gradient-descent methods for solving unconstrained optimization problems are presented, investigated theoretically, and compared numerically. The proposed methods are globally convergent for uniformly convex functions satisfying a certain condition, under the assumption that the step size is determined by the backtracking line search. In addition, convergence on strictly convex quadratic functions is discussed. Numerical comparisons show that the proposed methods behave better than some existing methods with respect to the Dolan and Moré performance profile across all analysed characteristics: number of iterations, CPU time, and number of function evaluations.
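To illustrate the hybridization idea, the following is a minimal sketch, not the authors' exact algorithm. It assumes the gradient-descent operator T(x) = x - t * grad f(x), with t chosen by Armijo backtracking, plugged into the standard Picard–Mann scheme y_k = (1 - alpha) x_k + alpha T(x_k), x_{k+1} = T(y_k); the function names, the fixed mixing parameter alpha, and the stopping tolerance are all illustrative assumptions.

```python
# Sketch of a Picard-Mann hybridization of gradient descent (illustrative,
# not the paper's exact method). Assumes T(x) = x - t * grad_f(x) with t
# from Armijo backtracking and a fixed mixing parameter alpha in (0, 1].
import numpy as np

def backtracking(f, grad, x, sigma=1e-4, beta=0.8):
    """Armijo backtracking: shrink t until the sufficient-decrease test holds."""
    g = grad(x)
    t = 1.0
    while f(x - t * g) > f(x) - sigma * t * g.dot(g):
        t *= beta
    return t

def picard_mann_gd(f, grad, x0, alpha=0.5, tol=1e-8, max_iter=1000):
    """Picard-Mann hybrid iteration applied to T(x) = x - t * grad f(x):
         y_k     = (1 - alpha) * x_k + alpha * T(x_k)
         x_{k+1} = T(y_k)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = backtracking(f, grad, x)
        Tx = x - t * g                        # one gradient-descent step T(x_k)
        y = (1.0 - alpha) * x + alpha * Tx    # Mann averaging step
        ty = backtracking(f, grad, y)
        x = y - ty * grad(y)                  # Picard step: x_{k+1} = T(y_k)
    return x

# Example on a strictly convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose unique minimizer is A^{-1} b.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(picard_mann_gd(f, grad, np.zeros(2)))
```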
