Abstract

It is well known that Newton's method may fail to converge when the initial guess does not lie in a specific quadratic convergence region. We propose a family of new variants of the Newton method with the potential advantage of a larger convergence region as well as more desirable properties near a solution. We prove quadratic convergence of the new family and provide specific bounds for the asymptotic error constant. We illustrate the advantages of the new methods on test problems, including two- and six-variable polynomial systems and a challenging signal processing example. We also present a numerical experimental methodology that uses a large number of randomized initial guesses for several methods from the new family, in turn providing guidance as to which of the methods is preferable in a particular search domain.
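As a rough illustration of this experimental methodology (not the paper's actual test harness), the sketch below samples random initial guesses uniformly from a box-shaped search domain and records, for each candidate method, the fraction of starts from which the iteration converges; the solver interface, bounds, and tolerances are hypothetical placeholders.

```python
# Minimal sketch of the randomized-initial-guess methodology described above.
# The solver interface step(f, jac, x) -> next iterate, the box bounds, and the
# tolerances are illustrative placeholders, not the paper's actual test harness.
import numpy as np

def estimate_convergence_fraction(methods, f, jac, lo, hi,
                                  n_trials=1000, tol=1e-10, max_iter=100, seed=0):
    """For each method (name -> step function), estimate the fraction of random
    starting points in the box [lo, hi] from which the iteration drives the
    residual ||f(x)|| below tol within max_iter steps."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    starts = rng.uniform(lo, hi, size=(n_trials, lo.size))
    fractions = {}
    for name, step in methods.items():
        converged = 0
        for x0 in starts:
            x = x0.copy()
            try:
                for _ in range(max_iter):
                    if np.linalg.norm(f(x)) < tol:
                        converged += 1
                        break
                    x = step(f, jac, x)      # one iteration of this variant
            except np.linalg.LinAlgError:    # singular Jacobian: treat as failure
                pass
        fractions[name] = converged / n_trials
    return fractions
```

Comparing these convergence fractions across methods for a given search domain is one simple way to decide which variant of the family to prefer there.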

Highlights

  • Newton’s method and its variants are a fundamental tool for solving nonlinear equations

  • We propose a family of generalized Newton methods, facilitated by an auxiliary (generalizing) function s, for solving systems of nonlinear equations

  • The method reduces to the classical Newton method when the generalizing function is the identity map, i.e., s(x) = x (see the sketch immediately below)
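The highlights describe the generalized scheme only at a high level, so the following sketch is illustrative rather than a reproduction of the paper's update: it assumes, purely for concreteness, that the generalized step is a Newton step carried out in the transformed variable u = s(x), which collapses to the classical Newton step when s is the identity map.

```python
# Illustrative only: the paper's precise generalized update is not reproduced
# here. This sketch assumes, for concreteness, that the generalized step is a
# Newton step carried out in the transformed variable u = s(x); with the
# identity map (s(x) = x, s_inv(u) = u, jac_s(x) = I) it collapses to the
# classical Newton step.
import numpy as np

def classical_newton_step(f, jac_f, x):
    """Classical Newton update: x - J_f(x)^{-1} f(x)."""
    return x - np.linalg.solve(jac_f(x), f(x))

def generalized_newton_step(f, jac_f, x, s, s_inv, jac_s):
    """Assumed s-transformed update: s^{-1}( s(x) - J_s(x) J_f(x)^{-1} f(x) )."""
    newton_dir = np.linalg.solve(jac_f(x), f(x))
    return s_inv(s(x) - jac_s(x) @ newton_dir)
```

For example, passing s=np.log, s_inv=np.exp and jac_s(x) = np.diag(1.0 / x) would apply a componentwise logarithmic transformation (valid only on the positive orthant); whether this corresponds to any member of the paper's family is not claimed here.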


Summary

Introduction

Newton’s method and its variants are a fundamental tool for solving nonlinear equations. When started at an initial guess close to a solution, Newton’s method is well defined and converges quadratically to a solution of the system f(x) = 0, provided the Jacobian of f is nonsingular and the second partial derivatives of f are bounded. For suitable choices of the generalizing function s, we illustrate via extensive numerical experiments that the region of convergence of the new method may be larger than the one observed for the classical Newton iteration. In Section 3, we establish the quadratic convergence results for the generalized Newton method. The analysis uses some standing notation: maps g : Rⁿ → Rⁿ that are twice continuously differentiable, and, for a symmetric matrix A ∈ Rⁿˣⁿ, the set of eigenvalues of A, denoted σ(A). The concepts of rate of convergence and the asymptotic error constant will have an important role in our analysis, so we recall their definitions next.
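For reference, the standard textbook definitions being alluded to read as follows (the paper's exact statements may differ in detail):

```latex
% A sequence (x_k) converging to x^* is said to converge with order p >= 1 and
% asymptotic error constant \lambda if
\[
  \lim_{k \to \infty} \frac{\| x_{k+1} - x^* \|}{\| x_k - x^* \|^{\,p}} = \lambda ,
\]
% with \lambda finite (and \lambda < 1 required when p = 1). Quadratic
% convergence is the case p = 2, so that eventually
\[
  \| x_{k+1} - x^* \| \approx \lambda \, \| x_k - x^* \|^{2} .
\]
```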

Main assumptions
Bounds on the asymptotic error constant λ
Findings
Conclusion and discussion

