Abstract

A root-finding method is developed that, like Newton's Method, evaluates both the function and its first derivative once per iteration, but the new method converges at the rate √3 + 1 ≈ 2.73, and moreover, its asymptotic error constant is proportional to the function's fourth-order derivative. By contrast, Newton's Method converges quadratically, with an asymptotic error constant proportional to the function's second-order derivative. Each iteration (except the first) of our Accelerated Newton's Method (ANM) uses the values of both the function and its first derivative at the previous iteration to estimate the function's second derivative. For the initial iteration we develop and recommend a modified version of Jarratt's Method, a method that evaluates the derivative of the function twice per iteration. Like Jarratt's Method, our modification converges with the fourth power of the initial error, but its asymptotic error constant depends primarily on the product of the function's second- and third-order derivatives rather than depending separately on the value of the second derivative. The efficient performance of ANM is illustrated using nine test functions and a range of initial values for each test function. These tests indicate that ANM requires on average 30% fewer function and derivative evaluations than the standard Newton's Method to achieve the same accuracy (noting again that the function and its derivative are evaluated once per iteration, exactly as in Newton's Method). Moreover, we find that ANM is more robust than Newton's Method, in that it converges to the root over a wider range of initial conditions.
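The abstract does not give the update formulas, so the sketch below is only a minimal illustration of the idea under stated assumptions: it pairs a Chebyshev-type correction with a second-derivative estimate obtained by cubic Hermite interpolation through the current and previous iterates (using f and f' at both), and it uses the classical fourth-order Jarratt step for the first iteration since the paper's modified Jarratt variant is not specified here. The function names, the particular estimator, and the test function are assumptions, not the paper's exact method.

```python
def anm(f, fprime, x0, tol=1e-12, max_iter=50):
    """Sketch of an accelerated Newton iteration with memory.

    Assumption: the paper's exact update is not stated in the abstract;
    this uses a Chebyshev-type step
        x_{n+1} = x_n - u - (f''_est / (2 f'(x_n))) * u**2,  u = f(x_n) / f'(x_n),
    with f'' estimated from f and f' at the current and previous iterates.
    """
    # First iteration: classical fourth-order Jarratt step (two derivative
    # evaluations; the paper recommends a *modified* Jarratt step instead).
    f0, d0 = f(x0), fprime(x0)
    y = x0 - (2.0 / 3.0) * f0 / d0
    dy = fprime(y)
    x1 = x0 - (3.0 * dy + d0) / (6.0 * dy - 2.0 * d0) * f0 / d0

    x_prev, f_prev, d_prev = x0, f0, d0
    x = x1
    for _ in range(max_iter):
        fx, dx = f(x), fprime(x)  # one f and one f' per iteration, as in Newton
        if abs(fx) < tol:
            return x
        h = x - x_prev
        # Second derivative of the cubic Hermite interpolant through
        # (x_prev, f_prev, d_prev) and (x, fx, dx), evaluated at x:
        fpp = 6.0 * (f_prev - fx) / h**2 + (2.0 * d_prev + 4.0 * dx) / h
        u = fx / dx
        x_prev, f_prev, d_prev = x, fx, dx
        x = x - u - fpp / (2.0 * dx) * u**2  # Chebyshev-type correction
    return x

# Usage example on a classic test function: f(x) = x**3 - 2x - 5,
# whose real root is near x = 2.0945514815.
root = anm(lambda x: x**3 - 2.0 * x - 5.0,
           lambda x: 3.0 * x**2 - 2.0,
           x0=2.5)
print(root)
```

With an exact second derivative, the Chebyshev step is third order; the point of reusing the previous iterate's values in the Hermite estimate is to gain higher-order convergence, in the spirit of the abstract's √3 + 1 claim, without any extra function or derivative evaluations per iteration.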
