Abstract

There are numerous applications for finding zeros of derivatives in function optimization. In this paper, a two-step fourth-order method is presented for finding a zero of the derivative. In the study of iterative methods, determining the ball of convergence is one of the important issues. This paper discusses the radius of the convergence ball, the uniqueness of the solution, and computable error bounds on the distances involved. In particular, in contrast to Wang's method, whose analysis requires hypotheses up to the fourth derivative, the local convergence of the new method is established under hypotheses only up to the second derivative, while its convergence order is raised to four. Furthermore, different radii of the convergence ball are determined under different, weaker hypotheses. Finally, the convergence criteria are verified on three numerical examples, and the new method is compared with Wang's method and a method of the same order in numerical experiments. The results show that the convergence order of the new method is four and that it attains higher accuracy at the same cost, so the new method is preferable.
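
The abstract does not state the paper's iteration explicitly. As a hedged illustration only, the sketch below applies a classical two-step fourth-order scheme of Ostrowski type to g(x) = f'(x), which is the general setting the paper addresses (a Newton half-step followed by a correction that reuses g(x_n) to reach order four). The names `ostrowski_min`, `fp`, `fpp`, and the test function f(x) = exp(x) - 2x are assumptions chosen for illustration and are not taken from the paper.

```python
import math

def ostrowski_min(fp, fpp, x0, tol=1e-12, max_iter=50):
    """Two-step fourth-order (Ostrowski-type) iteration applied to f'(x) = 0.

    fp  : first derivative f'
    fpp : second derivative f''
    x0  : starting point inside the convergence ball

    Illustrative stand-in for a generic two-step fourth-order scheme,
    not the method proposed in the paper.
    """
    x = x0
    for _ in range(max_iter):
        g, dg = fp(x), fpp(x)
        y = x - g / dg                                # Newton half-step on f'
        gy = fp(y)
        x_new = y - gy * (x - y) / (g - 2.0 * gy)     # Ostrowski correction
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = exp(x) - 2x has its minimizer at x = ln 2,
# i.e. a simple zero of f'(x) = exp(x) - 2 with f''(x) = exp(x) != 0.
root = ostrowski_min(lambda x: math.exp(x) - 2.0, math.exp, x0=1.0)
print(root, math.log(2.0))
```

Each iteration uses one evaluation of f'' and two of f', which is the typical cost profile for two-step fourth-order schemes of this kind.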
