Abstract

Problem statement: The major weaknesses of Newton's method for nonlinear equations are the computation of the Jacobian matrix and the solution of a system of n linear equations at every iteration. Approach: Function derivatives can be quite costly, and forming the Jacobian is computationally expensive, requiring the evaluation and storage of an n×n matrix at every iteration; this storage requirement becomes unrealistic as n grows large. Results: We proposed a new method that approximates the Jacobian by a diagonal matrix, with the aim of reducing the storage requirement, computational cost and CPU time, as well as avoiding the solution of n linear equations at each iteration. Conclusion/Recommendations: The proposed method is significantly cheaper than Newton's method, much faster than the fixed Newton's method, and suitable for small, medium and large scale nonlinear systems with dense or sparse Jacobians. Numerical experiments were carried out whose results show that the proposed method is very encouraging.

Highlights

  • Consider the system of nonlinear equations F(x) = 0 (1), where F : Rⁿ → Rⁿ satisfies the smoothness and nonsingularity properties listed in the Introduction; the convergence of Algorithm CN for this system is attractive

  • Newton's method for nonlinear equations has the following general form. Given an initial point x0, compute a sequence of corrections {sk} and iterates {xk} as follows (Algorithm CN, with k = 0, 1, 2, ... and JF the Jacobian matrix of F): Stage 1: solve JF(xk) sk = −F(xk); Stage 2: update xk+1 = xk + sk; Stage 3: repeat Stages 1-2 until convergence

  • Several modifications of Newton's method have been introduced to conquer some of its shortfalls (Drangoslav and Natasa, 1996; Hao and Qin, 2008; Natasa and Zorna, 2001), but most of these modifications still require computing and storing an n×n matrix (the Jacobian) at each iteration (Natasa and Zorna, 2001). We propose instead to approximate F'(xk) by a diagonal matrix, i.e., F'(xk) ≈ Dk (see the sketch after this list)
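
As a minimal sketch of the idea in Python, assuming the diagonal entries of Dk are formed by one-sided finite differences along each coordinate (an illustrative assumption; this excerpt does not give the paper's actual update rule for Dk), an iteration might look like this, storing only a length-n vector and replacing the linear solve with n scalar divisions:

    import numpy as np

    def diagonal_newton(F, x0, h=1e-7, tol=1e-10, max_iter=500):
        # Newton-like iteration with F'(xk) replaced by a diagonal matrix Dk.
        # Illustrative assumption: (Dk)_ii is estimated by a forward difference
        # of the i-th component function; the paper's own rule for Dk may differ.
        x = np.asarray(x0, dtype=float)
        n = len(x)
        for k in range(max_iter):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                return x, k
            d = np.empty(n)
            for i in range(n):
                e = np.zeros(n)
                e[i] = h
                d[i] = (F(x + e)[i] - Fx[i]) / h   # (Dk)_ii ≈ ∂F_i/∂x_i
            x = x - Fx / d   # sk = -Dk^{-1} F(xk): n divisions, no n×n solve
        return x, max_iter

Because Dk is diagonal, the per-iteration storage is O(n) rather than O(n²), which is the saving the abstract refers to for large n.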


Introduction

Consider the system of nonlinear equations:

F(x) = 0 (1)

where F : Rⁿ → Rⁿ has the following properties:

  • There exists x* with F(x*) = 0
  • F is continuously differentiable in a neighbourhood of x*
  • F'(x*) = JF(x*) is nonsingular

The most well-known method for solving (1) is the classical Newton's method, which has the following general form. Given an initial point x0, we compute a sequence of corrections {sk} and iterates {xk} as follows, where k = 0, 1, 2, ... and JF(xk) is the Jacobian matrix of F at xk:

Algorithm CN (Newton's method):
Stage 1: Solve JF(xk) sk = −F(xk)
Stage 2: Update xk+1 = xk + sk
Stage 3: Repeat Stages 1-2 until convergence

The convergence of Algorithm CN is attractive. However, the method depends on a good starting point (Dennis, 1983). Newton's method converges to x* provided the initial guess x0 is sufficiently close to x*, JF(x*) is nonsingular and JF(x) is Lipschitz continuous, and the rate of convergence is quadratic (Dennis, 1983), i.e.:

‖xk+1 − x*‖ ≤ h‖xk − x*‖² (2)

for some constant h. Even though it has these good qualities, the CN method has some major shortfalls as the dimension of the system increases (see Dennis, 1983 for details):

  • Computation and storage of the Jacobian at each iteration
  • Solving a system of n linear equations at each iteration
  • Increasing CPU time consumption as the dimension of the system grows
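
For comparison, a minimal Python sketch of Algorithm CN follows; the stopping test, tolerance and the sample 2×2 system are illustrative assumptions, not taken from the paper:

    import numpy as np

    def newton_cn(F, J, x0, tol=1e-10, max_iter=100):
        # Algorithm CN: classical Newton's method for F(x) = 0.
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:        # Stage 3: stop once converged
                return x, k
            s = np.linalg.solve(J(x), -Fx)      # Stage 1: solve JF(xk) sk = -F(xk)
            x = x + s                           # Stage 2: xk+1 = xk + sk
        return x, max_iter

    # Illustrative system: x1² + x2² − 2 = 0, x1 − x2 = 0, with root at (1, 1).
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
    J = lambda x: np.array([[2.0*x[0], 2.0*x[1]], [1.0, -1.0]])
    root, iters = newton_cn(F, J, [2.0, 0.5])

Each pass through Stage 1 evaluates and factors the n×n Jacobian, which is exactly the per-iteration cost the proposed diagonal approximation avoids.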


