Abstract
In this paper, a family of Steffensen-type methods of optimal order of convergence with two parameters is constructed by direct Newtonian interpolation. It satisfies the conjecture proposed by Kung and Traub (J. Assoc. Comput. Mach. 1974, 21, 643–651) that an iterative method without memory based on m evaluations per iteration can attain at most the optimal convergence order 2^(m−1). Furthermore, a family of Steffensen-type methods of super convergence is suggested by using arithmetic expressions for the parameters with memory, at no additional evaluation of the function. Their error equations, asymptotic convergence constants and convergence orders are obtained. Finally, they are compared with related root-finding methods in the numerical examples.
Introduction
Solving the nonlinear equation f(x) = 0 is a fundamental problem in scientific computation. Besides Newton's method (NM), Steffensen's method (SM),

x_{n+1} = x_n − f_n^2 / (f(x_n + f_n) − f_n),   n = 0, 1, 2, . . . ,   (1)

where f_n = f(x_n), is a famous method for dealing with such a problem, because it is derivative free and maintains quadratic convergence.
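As a concrete illustration (not taken from the paper), iteration (1) can be sketched in Python; the test function, starting point, and tolerance below are illustrative choices:

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's method (1): derivative-free, quadratically convergent.

    x_{n+1} = x_n - f(x_n)^2 / (f(x_n + f(x_n)) - f(x_n))
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        denom = f(x + fx) - fx  # two evaluations of f per iteration
        if denom == 0:
            break  # stagnation near the root (or a degenerate step)
        x_new = x - fx * fx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative use: the positive root of x^2 - 2, i.e. sqrt(2)
root = steffensen(lambda x: x * x - 2.0, 1.5)
```

Each iteration uses two evaluations of f and no derivatives, which is what makes the method attractive when f'(x) is unavailable or expensive.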
The proposed families are compared with NM, SM, the self-accelerating variant of SM (SASM), RWBM and DPMs by solving some nonlinear equations in the following examples.
Summary
But different from the multi-step methods above, these expressions of γ_n ensure that the methods achieve super convergence while using the same number of evaluations of f as before.
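A minimal sketch of the with-memory idea follows. Here γ_n scales the forward step in (1), and is updated from already-computed values of f, so no extra function evaluations are needed. The secant-style update of γ_n below is one common choice for self-acceleration, not necessarily the paper's arithmetic expressions:

```python
def steffensen_with_memory(f, x0, gamma0=0.01, tol=1e-12, max_iter=50):
    """Self-accelerating Steffensen sketch:
    x_{n+1} = x_n - gamma_n * f_n^2 / (f(x_n + gamma_n * f_n) - f_n),
    with gamma_n updated from stored values at no extra cost.
    """
    x, gamma = x0, gamma0
    fx = f(x)
    for _ in range(max_iter):
        denom = f(x + gamma * fx) - fx
        if denom == 0:
            break
        x_new = x - gamma * fx * fx / denom
        fx_new = f(x_new)  # reused as f_n of the next iteration
        if fx_new != fx:
            # secant approximation of -1/f'(x), built only from stored data
            gamma = -(x_new - x) / (fx_new - fx)
        if abs(x_new - x) < tol:
            return x_new
        x, fx = x_new, fx_new
    return x
```

Because γ_n approaches −1/f'(x*), the leading term of the error equation shrinks and the convergence order rises above 2, while each iteration still costs two evaluations of f.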