Abstract

In this paper, a descent Liu–Storey conjugate gradient method is extended to solve large-scale nonlinear systems of equations. Under certain assumptions, global convergence is established with a nonmonotone line search. The proposed method is well suited to large-scale problems because of its low storage requirement. Numerical results show that the new method is practically effective.

Highlights

  • Consider the following nonlinear system of equations: F(x) = 0, (1) where F: R^n → R^n has continuous partial derivatives.

  • In the past few decades, Newton’s methods have been widely used to solve problem (1) for their fast convergence speed; see [1,2,3,4,5,6,7,8,9,10].

  • La Cruz and Raydan [11] designed the Spectral Algorithm for Nonlinear Equations (SANE) to solve (1) and analyzed its convergence property based on a modified Grippo–Lampariello–Lucidi (GLL) [12] nonmonotone line search; a sketch of such a nonmonotone acceptance test follows this list.
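
A GLL-type nonmonotone line search accepts a trial step by comparing the new merit value with the worst merit value over a window of recent iterates rather than with the current one. The following is a minimal sketch of such a test on the merit function f(x) = ‖F(x)‖²; the window length, the constant gamma, the derivative-free decrease term, and the name nonmonotone_accept are illustrative assumptions, not the exact rule used in the paper.

```python
import numpy as np

def nonmonotone_accept(F, x, d, alpha, f_history, gamma=1e-4):
    """GLL-style nonmonotone acceptance test on f(x) = ||F(x)||^2.

    f_history holds the merit values of the last M iterates (M = window length).
    The step alpha*d is accepted if the trial merit value does not exceed the
    worst recent merit value minus a small derivative-free decrease term.
    The constant gamma and the form of the decrease term are illustrative.
    """
    r = F(x + alpha * d)
    f_trial = float(r @ r)
    f_max = max(f_history)   # nonmonotone reference: worst of the last M values
    return f_trial <= f_max - gamma * alpha**2 * float(d @ d)
```

In practice, f_history would be maintained as a fixed-length queue of the last M merit values, and a backtracking loop would shrink alpha by a constant factor until nonmonotone_accept returns True.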


Summary

Introduction

Consider the nonlinear system of equations F(x) = 0, (1) where F: R^n → R^n has continuous partial derivatives. In the past few decades, Newton’s methods have been widely used to solve problem (1) for their fast convergence speed; see [1,2,3,4,5,6,7,8,9,10]. However, they need to compute and store the Jacobian matrix of F (or an approximation of it) at each iteration. Therefore, Newton’s methods are not suitable for solving a large-scale problem in which the Jacobian matrix of F is unavailable or needs massive storage space. To overcome this shortcoming of Newton’s methods, in this paper we develop a numerical algorithm based on a descent Liu–Storey conjugate gradient method and a nonmonotone line search. Yuan and Zhang [29] proposed a numerical method for large-scale nonlinear systems of equations based on a three-term Polak–Ribière–Polyak conjugate gradient method and the hyperplane projection strategy given by Solodov and Svaiter [8]. Inspired by these CG-type methods for nonlinear systems of equations, in this paper we extend the MLS conjugate gradient method to solve problem (1), motivated by its sufficient descent property, global convergence, and excellent numerical performance. Throughout this paper, ‖ · ‖ denotes the Euclidean norm of vectors, and J(x) denotes the Jacobian matrix of F at x.
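
To make the summarized approach concrete, here is a minimal sketch of a Liu–Storey-type conjugate gradient iteration for F(x) = 0 combined with a nonmonotone backtracking line search. It uses the classical Liu–Storey formula β_k = F_k^T(F_k − F_{k−1}) / (−d_{k−1}^T F_{k−1}) rather than the paper’s modified (MLS) formula, omits any hyperplane projection step, and all constants and names (ls_cg_solve, M, gamma, rho) are assumptions made for illustration.

```python
import numpy as np
from collections import deque

def ls_cg_solve(F, x0, max_iter=500, tol=1e-6, M=10, gamma=1e-4, rho=0.5):
    """Sketch of a Liu–Storey-type CG iteration for F(x) = 0.

    Uses the classical LS parameter (not the paper's modified MLS formula)
    and a simple nonmonotone backtracking rule on the merit ||F(x)||^2.
    All constants (M, gamma, rho) and the function name are illustrative.
    """
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx                                   # initial search direction: the negative residual
    hist = deque([float(Fx @ Fx)], maxlen=M)  # merit values of the last M iterates

    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break

        # Nonmonotone backtracking: accept once the trial merit value drops
        # below the worst of the last M merits minus a small decrease term.
        alpha = 1.0
        while True:
            x_new = x + alpha * d
            F_new = F(x_new)
            if float(F_new @ F_new) <= max(hist) - gamma * alpha**2 * float(d @ d) or alpha < 1e-12:
                break
            alpha *= rho

        # Classical Liu-Storey parameter with F playing the role of the gradient:
        # beta_k = F_k^T (F_k - F_{k-1}) / (-d_{k-1}^T F_{k-1}).
        y = F_new - Fx
        denom = -float(d @ Fx)
        beta = float(F_new @ y) / denom if abs(denom) > 1e-12 else 0.0

        d = -F_new + beta * d                 # Liu-Storey direction update
        x, Fx = x_new, F_new
        hist.append(float(Fx @ Fx))

    return x
```

The deque-based history is what makes the line search nonmonotone: steps are accepted relative to the worst of the last M merit values, so ‖F‖ may occasionally increase while long-run decrease is still enforced, which is why such rules tend to accept longer steps than a monotone Armijo rule.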

Algorithm
Convergence Property
Numerical Results
Conclusion
