Abstract

A derivative-free quasi-Newton-type algorithm whose search direction is the product of a positive definite diagonal matrix and a residual vector is presented. The algorithm is simple to implement and is able to solve large-scale nonlinear systems of equations with separable functions. The diagonal matrix is obtained in a quasi-Newton manner at each iteration. Under suitable conditions, global and R-linear convergence results for the algorithm are established. Numerical tests on some benchmark separable nonlinear equation problems demonstrate the robustness and efficiency of the algorithm.
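To make the iteration concrete, the following is a minimal Python sketch of a scheme of this type: the search direction is the residual scaled componentwise by a positive diagonal approximation of the Jacobian, the diagonal is refreshed from the most recent step and residual difference in a quasi-Newton (componentwise secant) manner, and a simple derivative-free backtracking line search on ‖g‖² is used. The function name, the update rule, the line-search condition, and the safeguard bounds are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def diagonal_qn_solve(g, x0, tol=1e-8, max_iter=500):
    """Illustrative derivative-free iteration: d_k = -D_k^{-1} g(x_k), with
    D_k a positive definite diagonal approximation of the Jacobian.
    The diagonal update and line search below are sketch assumptions,
    not the paper's exact rules."""
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    diag = np.ones_like(x)                       # D_0 = I, positive definite diagonal
    for _ in range(max_iter):
        if np.linalg.norm(gx) <= tol:
            break
        direction = -gx / diag                   # diagonal scaling of the residual
        # derivative-free backtracking on the merit function f(x) = ||g(x)||^2
        alpha, f_old = 1.0, float(gx @ gx)
        while True:
            x_new = x + alpha * direction
            g_new = g(x_new)
            # accept on sufficient decrease, or give up backtracking for tiny alpha
            if float(g_new @ g_new) <= f_old - 1e-4 * alpha**2 * f_old or alpha < 1e-10:
                break
            alpha *= 0.5
        s, y = x_new - x, g_new - gx
        # componentwise secant estimates y_i / s_i, kept positive and bounded
        with np.errstate(divide="ignore", invalid="ignore"):
            diag = np.where(np.abs(s) > 1e-12, y / s, diag)
        diag = np.clip(np.abs(diag), 1e-4, 1e4)
        x, gx = x_new, g_new
    return x, gx
```

If g is separable in the componentwise sense assumed for this sketch, the quotient y_i / s_i is exactly a secant estimate of the i-th diagonal entry of the Jacobian, which is the structural property a diagonal approximation exploits.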

Highlights

  • Consider the problem of finding a solution of the nonlinear system of equations g(x) = 0 (1.1), where g = (g_1, g_2, . . ., g_n) : R^n → R^n is a separable function

  • We combine the diagonal Hessian approximation approach studied by Deng and Wan [2] with the spectral residual approach presented in [13] to propose, analyze, and implement a derivative-free algorithm for separable problems, which can be seen as an improved version of the dfsane algorithm and uses a positive definite diagonal matrix as the approximation of the Jacobian of the function g (a sketch of the spectral step appears after this list)

  • We have presented, analyzed, and implemented a derivative-free quasi-Newton-type algorithm for solving nonlinear systems of equations with separable functions
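As referenced in the second highlight, the spectral residual approach of [13] chooses its step scaling from a Barzilai–Borwein-type quotient built from the last step and residual difference. Below is a minimal Python sketch of that coefficient; the safeguard bounds and the fallback value are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def spectral_coefficient(s, y, sigma_min=1e-10, sigma_max=1e10):
    """Barzilai-Borwein-type spectral coefficient used by spectral residual
    methods such as dfsane: sigma = (s^T s) / (s^T y), with s = x_k - x_{k-1}
    and y = g(x_k) - g(x_{k-1}).  Bounds and fallback value are illustrative."""
    sts, sty = float(s @ s), float(s @ y)
    if sts == 0.0 or abs(sty) < 1e-16:      # degenerate step: fall back to 1
        return 1.0
    sigma = sts / sty
    # keep the magnitude in [sigma_min, sigma_max], preserving the sign
    return float(np.sign(sigma) * np.clip(abs(sigma), sigma_min, sigma_max))
```

In the algorithm described in the highlights, this scalar scaling is generalised to a full positive definite diagonal approximation of the Jacobian.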

Summary

Introduction

Keywords: derivative-free methods, quasi-Newton-type methods, convergence, numerical experiments.

For finding solutions of general nonlinear equations, quasi-Newton methods are well-known and commonly used algorithms because of their derivative-free nature [17, 21]. Among these methods, some are not suitable for large-scale problems due to their matrix storage requirements. We combine the diagonal Hessian approximation approach studied by Deng and Wan [2] with the spectral residual approach presented in [13] to propose, analyze, and implement a derivative-free algorithm for separable problems, which can be seen as an improved version of the dfsane algorithm and uses a positive definite diagonal matrix as the approximation of the Jacobian of the function g. Throughout, ‖ · ‖ stands for the Euclidean norm of vectors and the induced 2-norm of matrices.
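As a concrete instance of the problem class (1.1), the sketch below builds a simple separable system in which each component g_i depends only on x_i, for example g_i(x) = exp(x_i) - 1; this particular test function and the componentwise reading of "separable" are assumptions for illustration, not taken from the paper. A forward-difference check confirms that the Jacobian of such a system is diagonal, which is what makes a positive definite diagonal approximation a natural fit.

```python
import numpy as np

# A simple separable test system (assumed purely for illustration):
# each component g_i depends only on x_i, here g_i(x) = exp(x_i) - 1,
# whose unique root is x* = 0.
def g(x):
    return np.exp(x) - 1.0

n = 5
x = np.random.default_rng(0).normal(size=n)
eps = 1e-6
jac = np.empty((n, n))
for j in range(n):
    e = np.zeros(n); e[j] = eps
    jac[:, j] = (g(x + e) - g(x)) / eps          # forward-difference column
off_diag = jac - np.diag(np.diag(jac))
print("max off-diagonal entry:", np.abs(off_diag).max())   # ~0: the Jacobian is diagonal
```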

Preliminaries and algorithm
Convergence results
Numerical experiments
Findings
Conclusions