Abstract
We propose a sequential homotopy method for the solution of mathematical programming problems formulated in abstract Hilbert spaces under the Guignard constraint qualification. The method is equivalent to performing projected backward Euler timestepping on a projected gradient/antigradient flow of the augmented Lagrangian. The projected backward Euler equations can be interpreted as the necessary optimality conditions of a primal-dual proximal regularization of the original problem. The regularized problems are always feasible, satisfy a strong constraint qualification guaranteeing uniqueness of Lagrange multipliers, yield unique primal solutions provided that the stepsize is sufficiently small, and can be solved by a continuation in the stepsize. We show that equilibria of the projected gradient/antigradient flow and critical points of the optimization problem are identical, provide sufficient conditions for the existence of global flow solutions, and show that critical points with emanating descent curves cannot be asymptotically stable equilibria of the projected gradient/antigradient flow, practically eradicating convergence to saddle points and maxima. The sequential homotopy method can be used to globalize any locally convergent optimization method that can be used in a homotopy framework. We demonstrate its efficiency for a class of highly nonlinear and badly conditioned control constrained elliptic optimal control problems with a semismooth Newton approach for the regularized subproblems.
Highlights
Let X and Y be real Hilbert spaces and C ⊆ X a nonempty closed convex set.
This setting naturally comprises finite-dimensional problems of the form min_{x ∈ R^n} φ(x) subject to x_l ≤ x ≤ x_u and c(x) = 0, with X = R^n, Y = R^m, and C = {x ∈ R^n | x_l ≤ x ≤ x_u}, where some components of x_u and x_l may take on values of ±∞. Another popular example is partial differential equation (PDE) constrained optimization, where X = U × Q is a product of the state and control space, C encodes pointwise constraints on the controls, and c(x) = c((u, q)) = 0 is the PDE constraint, where we often assume that the state u ∈ U is locally uniquely determined by the control q ∈ Q as an implicit function u(q) via c((u(q), q)) = 0.
The application of a projected backward Euler method to the projected gradient/antigradient flow results in a sequential homotopy method, which we describe in a dedicated section.
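The flow and its backward Euler discretization can be sketched as follows. This is a generic form written from the abstract's description (descent in the primal variable x, ascent in the multiplier λ, on an augmented Lagrangian); the paper's precise augmentation and projection operators may differ.

```latex
% Augmented Lagrangian (generic form; the paper's exact augmentation may differ)
\mathcal{L}_\rho(x,\lambda) = \varphi(x) + \langle \lambda, c(x)\rangle_Y
  + \tfrac{\rho}{2}\,\|c(x)\|_Y^2

% Projected gradient/antigradient flow: descent in x, ascent in lambda
\dot{x}(t) = P_{C}\bigl(x(t) - \nabla_x \mathcal{L}_\rho(x(t),\lambda(t))\bigr) - x(t),
\qquad
\dot{\lambda}(t) = \nabla_\lambda \mathcal{L}_\rho(x(t),\lambda(t)) = c(x(t))

% Projected backward Euler step with stepsize \Delta t; the resulting implicit
% equations can be read as optimality conditions of a primal-dual
% proximally regularized problem, as stated in the abstract
x_{k+1} = P_{C}\bigl(x_k - \Delta t\, \nabla_x \mathcal{L}_\rho(x_{k+1},\lambda_{k+1})\bigr),
\qquad
\lambda_{k+1} = \lambda_k + \Delta t\, c(x_{k+1})
```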
Summary
Let X and Y be real Hilbert spaces and C ⊆ X a nonempty closed convex set. Let the nonlinear objective function φ : X → R and the nonlinear constraint function c : X → Y be twice continuously Fréchet differentiable. This setting naturally comprises finite-dimensional problems (known as Nonlinear Programming Problems, NLPs) of the form min_{x ∈ R^n} φ(x) subject to x_l ≤ x ≤ x_u and c(x) = 0, with X = R^n, Y = R^m, and C = {x ∈ R^n | x_l ≤ x ≤ x_u}, where some components of x_u and x_l may take on values of ±∞. Another popular example is partial differential equation (PDE) constrained optimization, where X = U × Q is a product of the state and control space, C encodes pointwise constraints on the controls, and c(x) = c((u, q)) = 0 is the PDE constraint, where we often assume that the state u ∈ U is locally uniquely determined by the control q ∈ Q as an implicit function u(q) via c((u(q), q)) = 0.
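A minimal toy illustration of this NLP setting, assuming a hypothetical two-variable problem (this is not the paper's method: the paper uses projected *backward* Euler with a semismooth Newton solver, whereas this sketch uses explicit forward Euler on the projected gradient/antigradient flow purely to keep the example short):

```python
# Toy sketch (hypothetical problem, not from the paper): explicit projected
# gradient/antigradient flow on an augmented Lagrangian for the NLP
#   min (x1 - 1)^2 + (x2 - 2)^2  s.t.  x1 + x2 = 2,  0 <= x <= 1.5.

def phi_grad(x):
    # Gradient of the objective phi(x) = (x1 - 1)^2 + (x2 - 2)^2.
    return [2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)]

def c(x):
    # Equality constraint c(x) = x1 + x2 - 2, with gradient (1, 1).
    return x[0] + x[1] - 2.0

def clip(v, lo=0.0, hi=1.5):
    # Componentwise projection onto the box C = [lo, hi]^2.
    return max(lo, min(hi, v))

def projected_flow(x, lam, dt=0.05, rho=2.0, steps=5000):
    # Descent in x, ascent in lam, on L(x, lam) = phi + lam*c + (rho/2)*c^2.
    for _ in range(steps):
        g = phi_grad(x)
        mult = lam + rho * c(x)          # multiplier estimate lam + rho*c
        gx = [g[0] + mult, g[1] + mult]  # grad_x of the augmented Lagrangian
        x = [clip(x[i] - dt * gx[i]) for i in range(2)]
        lam = lam + dt * c(x)            # antigradient (ascent) step in lam
    return x, lam

x_star, lam_star = projected_flow([0.0, 0.0], 0.0)
# Converges to the KKT point x = (0.5, 1.5) with multiplier lam = 1.
```

The forward-Euler stepsize here must be small for stability, which is exactly the kind of restriction the paper's implicit (backward Euler) treatment avoids.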