Abstract

This study employs exact-line-search iterative algorithms for solving large-scale unconstrained optimization problems, in which the search direction is a three-term modification of an iterative method with two different scaled parameters. The objective of this research is to assess the effectiveness of the new directions both theoretically and numerically. The sufficient descent property and global convergence of the suggested methods are established. For the numerical experiments, the methods are compared with a well-known existing three-term iterative method, and each method is evaluated over the same set of test problems with different initial points. The numerical results show that the proposed three-term methods are more efficient than the existing method. These methods can also produce an approximate linear regression equation for estimating a regression model. The findings of this study contribute to a better understanding of the applicability of numerical algorithms for estimating regression models.
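As a hedged illustration of the regression application only (the paper's RRM iteration is not reproduced here), a least-squares line fit $y \approx a + bx$ can be posed as an unconstrained minimization of the sum of squared errors and solved with a gradient-type iteration; the data, step size, and tolerance below are illustrative assumptions:

    # Hypothetical sketch: fit y ≈ a + b*x by minimizing f(w) = ||Xw - y||^2
    # with plain fixed-step steepest descent. The paper's three-term RRM
    # direction is not reproduced here; all values are illustrative.
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
    X = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]

    w = np.zeros(2)                            # w = [a, b]: intercept, slope
    for _ in range(5000):
        g = 2 * X.T @ (X @ w - y)              # gradient of f(w)
        if np.linalg.norm(g) < 1e-8:           # stop once the gradient is tiny
            break
        w -= 1e-3 * g                          # fixed step, purely illustrative

    print(f"approximate regression line: y = {w[0]:.3f} + {w[1]:.3f} x")

A fixed step is used only to keep the sketch minimal; the methods studied in the paper choose the step size $\lambda_k$ by exact line search instead.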

Highlights

  • The steepest descent (SD) method, introduced in 1847 by [1], is the simplest gradient-based iterative method for minimizing nonlinear optimization problems without constraints. This method falls under single-objective optimization, which seeks only one optimal solution [2]

  • Since far too little attention has been paid to modifying the search direction of this method, this study proposes a three-term direction for solving large-scale unconstrained optimization problems (a generic sketch of such a direction follows this list)

  • The results showed that RRM was the fastest solver on about 76.79% of the 14 selected test problems and solved 100% of the problems
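As a rough sketch of what a three-term direction looks like, consider $d_{k+1} = -g_{k+1} + \beta_k d_k - \theta_k y_k$ with $y_k = g_{k+1} - g_k$; the scaled parameters $\beta_k$ and $\theta_k$ below follow a known Hestenes-Stiefel-style three-term scheme and are assumptions, not the paper's RRM formulas, which are not reproduced on this page:

    # Hypothetical sketch of a generic three-term search direction,
    #   d_new = -g_new + beta*d_old - theta*y,  with y = g_new - g_old.
    # beta and theta are Hestenes-Stiefel-style choices (assumed here),
    # NOT the paper's RRM parameters.
    import numpy as np

    def three_term_direction(g_new, g_old, d_old):
        y = g_new - g_old
        denom = d_old @ y
        if abs(denom) < 1e-12:           # safeguard: revert to steepest descent
            return -g_new
        beta = (g_new @ y) / denom       # first scaled parameter (assumed)
        theta = (g_new @ d_old) / denom  # second scaled parameter (assumed)
        return -g_new + beta * d_old - theta * y

With these choices, $g_{k+1}^{T} d_{k+1} = -\|g_{k+1}\|^{2}$ holds identically, which is precisely the kind of sufficient descent property analyzed in the paper.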


Summary

Introduction

The steepest descent (SD) method, introduced in 1847 by [1], is the simplest gradient-based iterative method for minimizing nonlinear optimization problems without constraints. The standard SD method solves the unconstrained optimization problem $\min_{x \in \mathbb{R}^n} f(x)$, where $f(x)$ is a continuously differentiable function on $\mathbb{R}^n$, using the search direction $d_k = -g_k$ with $g_k = \nabla f(x_k)$. The minimization takes the iterative form $x_{k+1} = x_k + \lambda_k d_k$ (1). Line search rules are one way to carry out (1) by determining the direction $d_k$ and the step size $\lambda_k$; they can be classified into two types, exact and inexact line search rules. A brief conclusion and some future recommendations are provided in the last section of this paper.
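For a convex quadratic $f(x) = \tfrac{1}{2}x^{T}Ax - b^{T}x$, the exact line search step along $d_k = -g_k$ has the closed form $\lambda_k = g_k^{T}g_k / (g_k^{T}Ag_k)$, which permits a minimal runnable sketch of iteration (1); the matrix $A$, vector $b$, and tolerance are illustrative assumptions:

    # Minimal sketch of steepest descent, iteration (1), with EXACT line
    # search on a convex quadratic f(x) = 0.5*x^T A x - b^T x, where the
    # exact step has the closed form lam = (g^T g) / (g^T A g).
    import numpy as np

    A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite (assumed)
    b = np.array([1.0, 1.0])

    x = np.zeros(2)                          # initial point x_0
    for k in range(1000):
        g = A @ x - b                        # gradient g_k = grad f(x_k)
        if np.linalg.norm(g) < 1e-10:        # convergence test
            break
        d = -g                               # SD direction d_k = -g_k
        lam = (g @ g) / (g @ (A @ g))        # exact line search step
        x = x + lam * d                      # x_{k+1} = x_k + lambda_k * d_k

    print("minimizer ≈", x, "after", k, "iterations")

On general nonquadratic functions the exact step $\lambda_k = \arg\min_{\lambda \ge 0} f(x_k + \lambda d_k)$ has no closed form and must itself be found by a one-dimensional minimization.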

Evolution of Steepest Descent Method
Algorithm and Convergence Analysis of New Three-Term Search Direction
Sufficient Descent Conditions
Global Convergence
Numerical Experiments
Implementation in the Regression
Findings
Conclusions and Future Recommendations
