Abstract

In this paper, we propose a regularized Newton method without line search. The proposed method controls a regularization parameter instead of a step size in order to guarantee global convergence. We show that the proposed algorithm has the following convergence properties. (a) The proposed algorithm has global convergence under appropriate conditions. (b) It has a superlinear rate of convergence under the local error bound condition. (c) An upper bound on the number of iterations required to obtain an approximate solution $$x$$ satisfying $$\Vert \nabla f(x) \Vert \le \varepsilon$$ is $$O(\varepsilon^{-2})$$, where $$f$$ is the objective function and $$\varepsilon$$ is a given positive constant.
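To make the idea concrete, the following Python sketch shows one generic way such an iteration can be organized: the full Newton-type step $$d_k$$ solving $$(\nabla^2 f(x_k) + \mu_k I) d_k = -\nabla f(x_k)$$ is always taken or rejected as a whole, and only the regularization parameter $$\mu_k$$ is adjusted. The acceptance test and the constants used to update $$\mu$$ below are a trust-region-style heuristic chosen for illustration; they are assumptions, not the specific update rule proposed in the paper.

```python
import numpy as np

def regularized_newton(f, grad, hess, x0, eps=1e-6, mu=1.0,
                       mu_min=1e-8, eta=0.1, max_iter=1000):
    """Sketch of a regularized Newton iteration without line search.

    Progress is controlled only by the regularization parameter mu:
    an accepted step is always x_{k+1} = x_k + d_k (no step size).
    The acceptance/update rule is a generic heuristic, not the
    paper's exact scheme.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:          # stopping test: ||grad f(x)|| <= eps
            break
        H = hess(x)
        # Regularized Newton direction: (H + mu I) d = -g
        d = np.linalg.solve(H + mu * np.eye(n), -g)
        # Reduction predicted by the quadratic model vs. actual reduction
        pred = -(g @ d + 0.5 * d @ H @ d)
        actual = f(x) - f(x + d)
        rho = actual / pred if pred > 0 else -np.inf
        if rho >= eta:                        # sufficient decrease: accept the full step
            x = x + d
            mu = max(mu_min, 0.5 * mu)        # relax the regularization
        else:                                 # reject the step and regularize more strongly
            mu = 4.0 * mu
    return x
```

In this kind of scheme a larger $$\mu$$ plays the role that a shorter step plays in line-search methods: it shrinks and safeguards the step, which is what allows global convergence to be argued without ever searching along the direction.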
