A globalized inexact semismooth Newton method for nonsmooth fixed-point equations involving variational inequalities
Abstract: We develop a semismooth Newton framework for the numerical solution of fixed-point equations that are posed in Banach spaces. The framework is motivated by applications in the field of obstacle-type quasi-variational inequalities and implicit obstacle problems. It is discussed in a general functional analytic setting and allows for inexact function evaluations and Newton steps. Moreover, if a certain contraction assumption holds, we show that it is possible to globalize the algorithm by means of the Banach fixed-point theorem and to ensure q-superlinear convergence to the problem solution for arbitrary starting values. By means of a localization technique, our Newton method can also be used to determine solutions of fixed-point equations that are only locally contractive and not uniquely solvable. We apply our algorithm to a quasi-variational inequality which arises in thermoforming and which not only involves the obstacle problem as a source of nonsmoothness but also a semilinear PDE containing a nondifferentiable Nemytskii operator. Our analysis is accompanied by numerical experiments that illustrate the mesh-independence and q-superlinear convergence of the developed solution algorithm.
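The globalization idea in this abstract can be illustrated in one dimension. The following is a hedged sketch, not the paper's algorithm: the map T, the residual G(x) = x - T(x), and the acceptance rule are illustrative assumptions. It combines a semismooth Newton step with the contraction step that the Banach fixed-point theorem makes globally convergent:

```python
# Hedged sketch (not the paper's algorithm): a globalized semismooth Newton
# method for the fixed-point equation x = T(x) with a contractive map T.
# The Newton step targets G(x) = x - T(x) = 0; whenever it fails to reduce
# |G|, we fall back to the globally convergent fixed-point step x <- T(x).

def T(x):
    # contractive, nonsmooth example map (Lipschitz constant 1/2)
    return 0.5 * max(x, 0.0) + 1.0

def dT(x):
    # one element of the generalized derivative of T
    return 0.5 if x > 0 else 0.0

def globalized_newton(x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        g = x - T(x)
        if abs(g) < tol:
            return x
        x_newton = x - g / (1.0 - dT(x))      # semismooth Newton step on G
        if abs(x_newton - T(x_newton)) < abs(g):
            x = x_newton                       # accept the fast local step
        else:
            x = T(x)                           # Banach fixed-point fallback
    return x
```

Here the unique fixed point is x = 2; the fallback step alone would converge only linearly with rate 1/2, while the accepted Newton steps terminate in a few iterations.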
- Conference Article
2
- 10.2118/51908-ms
- Feb 14, 1999
Newton's method has been the overwhelming choice for solving the nonlinear equations that arise from the implicit discretization of the partial differential equations of reservoir simulation. Inexact Newton methods offer an attractive alternative, providing robust and considerably faster performance. Several inexact Newton-Krylov methods were examined for the solution of nonlinear equations arising in the finite-element discretization of a discrete-fracture reservoir simulation model. Two of the methods were found to be superior to Newton's method. These techniques also had the additional advantage of being amenable to effective parallelization.
- Research Article
25
- 10.1137/s105262340343590x
- Jan 1, 2005
- SIAM Journal on Optimization
Based on the identification of indices active at a solution of the mixed complementarity problem (MCP), we propose a class of Newton methods for which local superlinear convergence holds under extremely mild assumptions. In particular, the error bound condition needed for the identification procedure and the nondegeneracy condition needed for the convergence of the resulting Newton method are individually and collectively strictly weaker than the property of semistability of a solution. Thus the local superlinear convergence conditions of the presented method are weaker than conditions required for the semismooth (generalized) Newton methods applied to MCP reformulations. Moreover, they are also weaker than convergence conditions of the linearization (Josephy--Newton) method. For the special case of optimality systems with primal-dual structure, we further consider the question of superlinear convergence of primal variables. We illustrate our theoretical results with numerical experiments on some specially constructed MCPs whose solutions do not satisfy the usual regularity assumptions.
- Research Article
- 10.1007/s10957-006-9153-1
- Nov 11, 2006
- Journal of Optimization Theory and Applications
The Newton method and the inexact Newton method for solving quasidifferentiable equations via the quasidifferential are investigated. The notion of Q-semismoothness for a quasidifferentiable function is proposed. The superlinear convergence of the Newton method proposed by Zhang and Xia is proved under the Q-semismooth assumption. An inexact Newton method is developed and its linear convergence is shown.
- Research Article
33
- 10.1137/130926730
- Jan 1, 2015
- SIAM Journal on Optimization
- Research Article
3
- 10.1007/s10957-016-0917-y
- Mar 11, 2016
- Journal of Optimization Theory and Applications
Constraint reduction is an important technique because it can substantially reduce the computational cost of interior point methods. Park and O'Leary proposed a constraint-reduced predictor-corrector algorithm for semidefinite programming with polynomial global convergence, but they did not show its superlinear convergence. We develop the first constraint-reduced algorithm for semidefinite programming having both polynomial global and superlinear local convergence. The new algorithm repeats a corrector step to make an iterate approach a central path tangentially, by which superlinear convergence can be achieved. This study proves its convergence rate and shows its effective cost saving in numerical experiments.
- Single Report
- 10.2172/6132932
- Jan 1, 1990
During the 1986--1989 project period, two major areas of research developed into which most of the work fell: "matrix-free" methods for solving linear systems, by which we mean iterative methods that require only the action of the coefficient matrix on vectors and not the coefficient matrix itself, and Newton-like methods for underdetermined nonlinear systems. In the 1990 project period of the renewal grant, a third major area of research developed: inexact Newton and Newton iterative methods and their applications to large-scale nonlinear systems, especially those arising in discretized problems. An inexact Newton method is any method in which each step reduces the norm of the local linear model of the function of interest. A Newton iterative method is any implementation of Newton's method in which the linear systems that characterize Newton steps (the "Newton equations") are solved only approximately using an iterative linear solver. Newton iterative methods are properly considered special cases of inexact Newton methods. We describe the work in these areas and in other areas in this paper.
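The inexact Newton condition defined above (each step reduces the norm of the local linear model) can be made concrete. In this hedged sketch the test problem, the forcing term eta, and the crude normal-equations inner iteration are all illustrative choices standing in for the iterative linear solvers the report refers to:

```python
import numpy as np

# Hedged illustration of the definition above: an inexact Newton step s is
# accepted once the local linear-model residual satisfies
#     ||F(x) + J(x) s|| <= eta * ||F(x)||      (forcing term eta < 1).
# F and J below are toy choices; the inner solver is a stand-in for any
# iterative linear solver with the same stopping rule.

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def J(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

def inexact_solve(A, f, eta):
    # gradient iteration on the normal equations until the inexact
    # Newton condition ||A s + f|| <= eta ||f|| holds
    s = np.zeros_like(f)
    omega = 1.0 / np.linalg.norm(A.T @ A, 2)
    while np.linalg.norm(A @ s + f) > eta * np.linalg.norm(f):
        s -= omega * (A.T @ (A @ s + f))
    return s

def inexact_newton(x, eta=0.1, tol=1e-10, max_iter=100):
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            return x
        x = x + inexact_solve(J(x), f, eta)
    return x
```

Starting from (2, 1), the iteration converges to the root (sqrt(2), sqrt(2)) even though no linear system is ever solved exactly.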
- Research Article
6
- 10.1016/j.amc.2012.11.018
- Dec 20, 2012
- Applied Mathematics and Computation
The shift techniques for a nonsymmetric algebraic Riccati equation
- Research Article
3
- 10.1016/j.amc.2004.03.002
- Apr 23, 2004
- Applied Mathematics and Computation
Inexact block Newton methods for solving nonlinear equations
- Supplementary Content
71
- 10.1080/10556780310001636369
- Jun 1, 2004
- Optimization Methods and Software
The semismooth Newton method is a nonsmooth Newton-type method applied to a suitable reformulation of the complementarity problem as a nonlinear and nonsmooth system of equations. It is one of the standard methods for solving this kind of problem, and it can be implemented in an inexact way so that all linear systems of equations have to be solved only approximately. However, from a practical point of view, this inexact Newton method seems to behave significantly worse than its exact counterpart. The aim of this paper is therefore to show that the inexact Newton method can also be used in a reliable and efficient way, at least for some classes of problems. We illustrate this statement with numerical examples with up to one million variables.
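As a toy illustration of the reformulation the abstract describes (not the paper's setting), the following sketch rewrites a scalar complementarity problem with the Fischer-Burmeister function and solves the resulting nonsmooth equation by a (here exact) semismooth Newton method; the specific F and the chosen generalized-derivative element are assumptions for the example:

```python
import math

# Scalar complementarity problem:  x >= 0, F(x) >= 0, x * F(x) = 0.
# Reformulated with the Fischer-Burmeister function
#     phi(a, b) = sqrt(a^2 + b^2) - a - b,
# whose roots are exactly the complementarity solutions.

def F(x):
    return x - 2.0              # illustrative F; the NCP solution is x = 2

def dF(x):
    return 1.0

def fb(a, b):
    return math.hypot(a, b) - a - b

def semismooth_newton(x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        phi = fb(x, F(x))
        if abs(phi) < tol:
            return x
        r = math.hypot(x, F(x))
        if r == 0.0:
            g = -1.0            # pick an element of the generalized derivative
        else:
            g = (x / r - 1.0) + (F(x) / r - 1.0) * dF(x)
        x -= phi / g
    return x
```

Away from the kink at (x, F(x)) = (0, 0) the reformulated equation is smooth, so the iteration converges locally fast, which is the behaviour the exact method in the paper exploits.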
- Research Article
6
- 10.1080/00207160701870845
- Aug 1, 2009
- International Journal of Computer Mathematics
In this study we are concerned with the local convergence of a Newton-type method introduced by us [I.K. Argyros and D. Chen, On the midpoint iterative method for solving nonlinear equations in Banach spaces, Appl. Math. Lett. 5 (1992), pp. 7–9.] for approximating a solution of a nonlinear equation in a Banach space setting. This method has also been studied by Homeier [H.H.H. Homeier, A modified Newton method for rootfinding with cubic convergence, J. Comput. Appl. Math. 157 (2003), pp. 227–230.] and Özban [A.Y. Özban, Some new variants of Newton's method, Appl. Math. Lett. 17 (2004), pp. 677–682.] in real or complex space. The benefits of using this method over other methods using the same information have been explained in [I.K. Argyros, Computational theory of iterative methods, in Studies in Computational Mathematics, Vol. 15, C.K. Chui and L. Wuytack, eds., Elsevier Science Inc., New York, USA, 2007.; I.K. Argyros and D. Chen, On the midpoint iterative method for solving nonlinear equations in Banach spaces, Appl. Math. Lett. 5 (1992), pp. 7–9.; H.H.H. Homeier, A modified Newton method for rootfinding with cubic convergence, J. Comput. Appl. Math. 157 (2003), pp. 227–230.; A.Y. Özban, Some new variants of Newton's method, Appl. Math. Lett. 17 (2004), pp. 677–682.]. Here, we give the convergence radii for this method under a type of weak Lipschitz conditions proven to be fruitful by Wang in the case of Newton's method [X. Wang, Convergence of Newton's method and inverse function in Banach space, Math. Comput. 68 (1999), pp. 169–186 and X. Wang, Convergence of Newton's method and uniqueness of the solution of equations in Banach space, IMA J. Numer. Anal. 20 (2000), pp. 123–134.]. Numerical examples are also provided.
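The midpoint scheme referred to above (Argyros-Chen, also studied by Homeier and Özban) evaluates the derivative at an auxiliary point half a Newton step away, which yields cubic convergence under standard smoothness assumptions. A minimal scalar sketch; the test equation below is an illustrative choice, not from the paper:

```python
# Midpoint Newton-type scheme:
#     x_{n+1} = x_n - f(x_n) / f'(x_n - f(x_n) / (2 f'(x_n))),
# i.e. the derivative is taken at the midpoint of an ordinary Newton step.

def midpoint_newton(f, df, x, tol=1e-13, max_iter=50):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        mid = x - fx / (2.0 * df(x))   # half a Newton step
        x = x - fx / df(mid)           # derivative taken at the midpoint
    return x
```

For example, `midpoint_newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5)` approximates the real cube root of 2 using the same two evaluations per step (one f, one f') as Newton's method plus one extra derivative.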
- Supplementary Content
- 10.1088/0266-5611/14/2/011
- Apr 1, 1998
- Inverse Problems
Book review: Some Newton Type Methods for the Regularization of Nonlinear Ill-Posed Problems, Schriften der Johannes-Kepler-Universität Linz, Reihe C: Technik und Naturwissenschaften, Band 15, B Blaschke 1996 Linz: Universitätsverlag Rudolf Trauner, 145 pp, ISBN 3-85320-816-9, öS248.00, DM34.00, sFr31.50. The book under review is the PhD thesis of Barbara Kaltenbacher-Blaschke, prepared at the Institut für Mathematik of the Johannes-Kepler-Universität Linz, Austria. This book summarizes the author's papers [3,4,6]. In this book Newton type methods for the solution of nonlinear ill-posed problems are analysed. Bakushinskii [1] was most probably the first to analyse a Newton type method for the solution of nonlinear ill-posed problems. If F is a Fréchet-differentiable operator, Bakushinskii established local convergence for the iteratively regularized Gauss-Newton technique if a solution of the operator equation (relative to the initial guess) satisfies a source-wise representation. In (2), denotes a given perturbation of the exact data, which satisfies. Assuming a source-wise representation (3) of the solution relative to the initial guess is in many applications inappropriate. Therefore the author studies convergence of Newton type methods without assuming source-wise representations of the solution. Assuming in (2) and convergence of the iteratively regularized Gauss-Newton technique for, one finds that the limit satisfies, i.e., is a critical point.
The aim of the author is to prove convergence of Newton type methods to a solution of (1) (and not merely to a critical point), and therefore assumptions have to be posed on the operator F which exclude (at least locally) critical points that are not solutions. In this book two conditions on the operator F are studied. The first condition depends on the actual value of in (3). If is Lipschitz continuous, then locally (5) is more restrictive the smaller is. For, R = I, and, (4) is equivalent to Lipschitz continuity of. The second condition (6) is a Newton-Mysovskii condition as studied e.g. in the books by Kantorowitsch and Akilov [5] and Deuflhard and Hohmann [7] (see also the references quoted therein). The author studies in her book a class of Newton type methods defined by (7), where for small is an approximation of and is an approximation of. As special cases of (7), the iteratively regularized Gauss-Newton technique and a Newton-Landweber method (introduced in this book) can be considered. The Newton-Landweber iteration is a method where the linear equations occurring in each Newton step are solved approximately with a Landweber iteration combined with an appropriate stopping criterion. A similar approach has been suggested recently by Hanke [2], who uses a conjugate gradient technique instead of the Landweber method for the inner iteration. The author studies extensively approximation properties and convergence results of the iterates of (7). In the case of measurement errors, stopping criteria are developed which stabilize the output of (7). The theory developed in this book applies to the particular inverse problem of reconstructing the diffusion parameter in a quasi-linear elliptic differential equation from transient measurements.
[1] Bakushinskii A B 1992 The problem of the iteratively regularized Gauss-Newton method Comput. Math. Math. Phys. 32 1353-9
[2] Hanke M 1997 Regularizing properties of a truncated Newton-CG algorithm for nonlinear inverse problems Preprint No 280, Universität Kaiserslautern
[3] Kaltenbacher B 1998 A posteriori parameter choice strategies for some Newton type methods for the regularization of nonlinear ill-posed problems (submitted)
[4] Kaltenbacher B 1997 Some Newton type methods for the solution of nonlinear ill-posed problems Inverse Problems 13 729-53
[5] Kantorowitsch L W and Akilov G P 1964 Funktionalanalysis in Normierten Räumen (Berlin: Akademie)
[6] Blaschke B, Neubauer A and Scherzer O 1997 On convergence rates for the iteratively regularized Gauss-Newton method IMA J. Numer. Anal. to appear
[7] Deuflhard P and Hohmann A 1995 Numerical Analysis. A First Course in Scientific Computation (Berlin, New York: de Gruyter)
O Scherzer, Universität Linz
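The iteratively regularized Gauss-Newton update discussed in this review has the generic form x_{k+1} = x_k - (J^T J + a_k I)^{-1} (J^T (F(x_k) - y) + a_k (x_k - x_0)). A hedged sketch with an illustrative forward map (not the book's quasi-linear elliptic identification problem):

```python
import numpy as np

# Iteratively regularized Gauss-Newton sketch with a geometrically
# decaying regularization sequence a_k = alpha0 * q^k.  The extra term
# a_k (x_k - x_0) pulls the iterate toward the initial guess x0, which
# is what stabilizes the method for ill-posed problems.

def irgn(F, jac, y, x0, alpha0=1.0, q=0.5, n_steps=25):
    x = x0.copy()
    alpha = alpha0
    for _ in range(n_steps):
        Jm = jac(x)
        A = Jm.T @ Jm + alpha * np.eye(len(x))
        rhs = Jm.T @ (F(x) - y) + alpha * (x - x0)
        x = x - np.linalg.solve(A, rhs)
        alpha *= q
    return x
```

With exact data the iterates approach the exact solution as a_k decreases; with noisy data, as the review emphasizes, the iteration must instead be stopped early by a discrepancy-type rule.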
- Research Article
10
- 10.1002/(sici)1097-0363(19960730)23:2<177::aid-fld418>3.0.co;2-n
- Jul 30, 1996
- International Journal for Numerical Methods in Fluids
This paper addresses the resolution of non-linear problems arising from an implicit time discretization in CFD problems. We study the convergence of the Newton-GMRES algorithm with a Jacobian approximated by a finite difference scheme and with restarting in GMRES. In our numerical experiments we observe, as predicted by the theory, the impact of the matrix-free approximations. A second-order scheme clearly improves the convergence in the Newton process. Many scientific applications lead to a non-linear system of equations. We consider here the numerical simulation of steady state compressible flows. Implicit time discretizations allow us to use large time steps. On the other hand, at each time step a non-linear system of equations must be solved. Because of memory requirements, we want to use a so-called matrix-free algorithm. Several authors (see e.g. References 1 and 2) have considered inexact Newton methods where the Newton equations are solved approximately by an iterative solver. Moreover, since the Jacobian is required only through a matrix-vector product, it can be approximated by a finite difference scheme (see References 3 and 4). The resulting matrix-free algorithm, which we call Newton-MF-GMRES, has been studied there with no restarting in GMRES. Here we extend these results to GMRES with restarting, denoted GMRES(m), as designed in Reference 5. Global convergence of Newton can be enhanced by a line search backtracking procedure provided that the approximate solution given by the iterative solver is a descent direction (Reference 6). We give a sufficient condition on the stopping criterion of GMRES(m) to guarantee this result. The quadratic local convergence of the basic Newton iterations is no longer achieved with the Newton-MF-GMRES method. As in Reference 3, but in the context of restarting, we give here sufficient conditions on the stopping criterion and the approximation of the Jacobian to obtain linear local convergence.
We introduce a centred second-order difference quotient to approximate the Jacobian. This scheme is more expensive than the usual first-order difference quotient, but it is more accurate and leads to better Newton convergence. We apply the Newton-MF-GMRES(m) algorithm to the numerical solution of the compressible Navier-Stokes equations. We present results for two steady state problems. We study in detail the convergence of Newton and GMRES for one implicit time step and also for the stationary non-linear problem.
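The two difference quotients compared above can be stated directly: GMRES needs only Jacobian-vector products J(x) @ v, and the centred quotient trades one extra function evaluation per product for an O(h^2) error instead of O(h). A sketch with an illustrative F:

```python
import numpy as np

# Matrix-free Jacobian-vector products via difference quotients of F.
# jv_forward uses the usual one-sided (first-order) quotient; jv_centred
# uses the centred second-order quotient discussed in the paper.

def jv_forward(F, x, v, h=1e-6):
    # first-order: error O(h)
    return (F(x + h * v) - F(x)) / h

def jv_centred(F, x, v, h=1e-6):
    # second-order: error O(h^2), at the cost of one extra F-evaluation
    return (F(x + h * v) - F(x - h * v)) / (2.0 * h)
```

For a smooth toy map such as F(x) = (sin(x0) + x1^2, x0 x1), the centred quotient reproduces the exact product J(x) v several orders of magnitude more accurately than the one-sided quotient at the same h, which is the trade-off behind the improved Newton convergence reported above.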
- Research Article
47
- 10.1137/s0895479897322999
- Jan 1, 1998
- SIAM Journal on Matrix Analysis and Applications
When Newton's method is applied to find the maximal symmetric solution of a discrete algebraic Riccati equation (DARE), convergence can be guaranteed under moderate conditions. In particular, the initial guess does not need to be close to the solution. The convergence is quadratic if the Fréchet derivative is invertible at the solution. When the closed-loop matrix has eigenvalues on the unit circle, the derivative at the solution is not invertible. The convergence of Newton's method is shown to be either quadratic or linear with the common ratio $\frac{1}{2}$, provided that the eigenvalues on the unit circle are all semisimple. The linear convergence appears to be dominant, and the efficiency of the Newton iteration can be improved significantly by applying a double Newton step at the right time.
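The ratio-1/2 phenomenon and the double Newton step can already be seen in a scalar caricature: Newton's method on a function with a double root halves the error at each step, while doubling the step recovers the root at once. This toy mirrors the DARE situation described above but is not the Riccati iteration itself:

```python
# Scalar caricature (not the Riccati iteration): at a double root of
# f(x) = (x - 2)^2, Newton's method converges linearly with ratio 1/2,
# and a double Newton step removes the defect in one shot.

def newton_step(x):
    f, df = (x - 2.0) ** 2, 2.0 * (x - 2.0)
    return x - f / df          # halves the distance to the double root x = 2

def double_newton_step(x):
    f, df = (x - 2.0) ** 2, 2.0 * (x - 2.0)
    return x - 2.0 * f / df    # lands on the double root exactly
```

Applying a double step "at the right time", as the abstract puts it, is what turns the dominant linear phase of the iteration back into a fast one.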
- Conference Article
- 10.36334/modsim.2011.a4.jin
- Dec 12, 2011
Inverse problems arise whenever one searches for unknown causes based on observation of their effects. Such problems are usually ill-posed in the sense that their solutions do not depend continuously on the data. In practical applications, one never has the exact data; instead only noisy data are available due to errors in the measurements. Thus, the development of stable methods for solving inverse problems is an important topic. In the last two decades, many methods have been developed for solving nonlinear inverse problems. Due to their straightforward implementation and fast convergence, more and more attention has been paid to Newton-type regularization methods, including the general iteratively regularized Gauss-Newton methods and the inexact Newton regularization methods. The iteratively regularized Gauss-Newton method was proposed by Bakushinski for solving nonlinear inverse problems in Hilbert spaces, and the method was quickly generalized to its general form. These methods produce all the iterates in some trust region centered around the initial guess. The regularization property was explored under either a priori or a posteriori stopping rules. We will present our recent convergence results when the discrepancy principle is used to terminate the iteration. The inexact Newton regularization methods were initiated by Hanke and then generalized by Rieder to solve nonlinear inverse problems in Hilbert spaces. In contrast to the iteratively regularized Gauss-Newton methods, such methods produce the next iterate in a trust region centered around the current iterate by regularizing local linearized equations. An approximate solution is output by a discrepancy principle. Although numerical simulation indicates that they are quite efficient, for a long time it has been an open problem whether the inexact Newton methods are order optimal. We will report our recent work and confirm that the methods indeed are order optimal.
In some situations, regularization methods formulated in a Hilbert space setting may not produce good results, since they tend to smooth the solutions and thus destroy special features of the exact solution. On the other hand, many inverse problems can be more naturally formulated in Banach spaces than in Hilbert spaces. Therefore, it is necessary to develop regularization methods in the framework of Banach spaces. By making use of duality mappings and the Bregman distance, we will indicate how to formulate some Newton-type methods in a Banach space setting and present the corresponding convergence results.
- Research Article
6
- 10.1017/s0334270000007499
- Apr 1, 1995
- The Journal of the Australian Mathematical Society. Series B. Applied Mathematics
In this paper, an inexact Newton method for nonlinear systems of equations is proposed. The method applies nonmonotone techniques, and both Newton's method and the inexact Newton method can be viewed as special cases of this new method. The method converges globally and quadratically. Some numerical experiments are reported for both standard test problems and an application in the computation of Hopf bifurcation points.
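The nonmonotone idea mentioned above can be sketched as a backtracking rule that compares a trial residual with the maximum over the last few residual norms, so occasional increases are tolerated; the scalar setting and the parameters M and sigma below are illustrative choices, not the paper's method:

```python
# Hedged sketch of a nonmonotone Newton iteration: a trial step is accepted
# if it improves on the maximum of the last M residual norms, rather than
# on the current one, so the residual may occasionally increase.

def nonmonotone_newton(F, dF, x, M=5, sigma=1e-4, tol=1e-12, max_iter=100):
    history = [abs(F(x))]
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            return x
        s = -fx / dF(x)                       # Newton direction
        t = 1.0
        while abs(F(x + t * s)) > (1.0 - sigma * t) * max(history):
            t *= 0.5                          # nonmonotone backtracking
        x = x + t * s
        history = (history + [abs(F(x))])[-M:]
    return x
```

With M = 1 this reduces to an ordinary monotone backtracking Newton method, which is the sense in which the classical schemes are special cases.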