Abstract
We used a Kurchatov-type accelerator to construct an iterative method with memory for solving nonlinear systems, with sixth-order convergence. It was developed from an initial scheme without memory with order of convergence four. Few multidimensional schemes in the very recent literature use more than one previous iterate, and those that do mostly have low orders of convergence. The proposed scheme showed its efficiency and robustness in several numerical tests, where it was also compared with existing procedures of high order of convergence. These numerical tests included large nonlinear systems. In addition, we show that the proposed scheme has very stable qualitative behavior, by analyzing an associated multidimensional real rational function and by comparing its basin of attraction with those of the comparison methods.
Highlights
New and efficient iterative techniques are needed for obtaining the solution ξ of a system of nonlinear equations of the form F(x) = 0, where F : D ⊆ R^n → R^n
We provide a deep analysis of the suggested scheme regarding its order of convergence (Section 2) and its stability properties, constructing an associated multidimensional discrete dynamical system.
Combining the Traub-Steffensen family of methods and a second step with different divided-difference operators, we propose the class of iterative schemes described as y^(j) = x^(j) − [u^(j), x^(j); F]^(−1) F(x^(j)), (6)
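The first step in (6) can be sketched numerically. The block below is a minimal illustration, assuming the standard componentwise first-order divided-difference operator and the illustrative choice u^(j) = x^(j) + F(x^(j)); the paper's actual definition of u^(j) (e.g., with a damping parameter) is not shown in this excerpt.

```python
import numpy as np

def divided_difference(F, u, v):
    """Componentwise first-order divided-difference operator [u, v; F]:
    column k differences F in the k-th coordinate only."""
    n = len(u)
    M = np.zeros((n, n))
    for k in range(n):
        a = np.concatenate([u[:k + 1], v[k + 1:]])  # u in coords <= k
        b = np.concatenate([u[:k], v[k:]])          # u in coords <  k
        M[:, k] = (F(a) - F(b)) / (u[k] - v[k])
    return M

def traub_steffensen_step(F, x):
    """One first step y = x - [u, x; F]^{-1} F(x), with u = x + F(x)
    (an illustrative choice; the paper's u^(j) may differ)."""
    x = np.asarray(x, dtype=float)
    Fx = F(x)
    M = divided_difference(F, x + Fx, x)
    return x - np.linalg.solve(M, Fx)
```

Applied to, say, F(x) = (x₁² − 1, x₂² − 4) near its root (1, 2), one such step already reduces the residual norm substantially.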
Summary
The first multidimensional derivative-free method was proposed by Samanskii in [6], by replacing the Jacobian matrix F′(x^(j)) with a divided-difference operator: x^(j+1) = x^(j) − [x^(j) + F(x^(j)), x^(j); F]^(−1) F(x^(j)), j = 0, 1, … This scheme keeps the quadratic order of convergence of Newton’s procedure. Different scalar iterative schemes with memory have been designed (a good overview can be found in [8]), mostly derivative-free ones. These have been constructed with increasing orders of convergence, but also with increasing computational complexity. Some methods with memory improve the convergence rate of Steffensen’s method or Steffensen-type methods at the expense of additional evaluations of vector functions, divided differences, or changes in the points of iteration. Some conclusions and the references used bring this manuscript to an end.
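Samanskii's derivative-free scheme above can be sketched as follows. This is a minimal illustration, assuming the standard componentwise first-order divided-difference operator for systems; tolerances and the test function are placeholders.

```python
import numpy as np

def divided_difference(F, u, v):
    """Componentwise first-order divided-difference operator [u, v; F]."""
    n = len(u)
    M = np.zeros((n, n))
    for k in range(n):
        a = np.concatenate([u[:k + 1], v[k + 1:]])
        b = np.concatenate([u[:k], v[k:]])
        M[:, k] = (F(a) - F(b)) / (u[k] - v[k])
    return M

def samanskii(F, x0, tol=1e-10, max_iter=50):
    """Derivative-free scheme x+ = x - [x + F(x), x; F]^{-1} F(x),
    which keeps the quadratic convergence of Newton's method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        M = divided_difference(F, x + Fx, x)
        x = x - np.linalg.solve(M, Fx)
    return x
```

For example, on F(x) = (x₁² − 1, x₂² − 4) with starting guess (1.2, 2.2), the iteration converges rapidly to the root (1, 2).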