Abstract

In design optimization and parameter identification, the objective, or response, function(s) are typically linked to the actually independent variables through equality constraints, which we will refer to as state equations. Our key assumption is that it is impossible to form and factor the corresponding constraint Jacobian; instead, one has some fixed-point algorithm for computing a feasible state, given any reasonable value of the independent variables. The ultimate goal is to derive, from a given state equation solver and a function for evaluating the objective(s), an iterative procedure that achieves primal and (some kind of) dual feasibility as well as optimality more or less simultaneously. Ideally, the cost should amount to only a handful of simulation runs needed to (re-)gain primal feasibility. So far we have concentrated on ways of obtaining dual feasibility in a piggyback fashion. By this we mean the simultaneous solution of the adjoint equations by an iterative procedure that is obtained by automatic differentiation from the state residual evaluation code. It is shown in particular that the Lagrange function value exhibits superconvergence to the reduced function value.

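The following is a minimal, illustrative sketch of such a piggyback coupling, not the paper's implementation: it assumes a contractive fixed-point map G(y, x) for the state and an objective f(y, x), and uses JAX as a stand-in for a generic automatic differentiation tool applied to the residual evaluation code. The names G, f, piggyback_step, and lagrangian are hypothetical.

```python
# Piggyback adjoint iteration (sketch, assuming a contractive state update G).
import jax
import jax.numpy as jnp

def G(y, x):
    # Toy contractive state update y_{k+1} = G(y_k, x); stands in for the solver.
    A = jnp.array([[0.5, 0.1], [0.0, 0.4]])
    return A @ y + x

def f(y, x):
    # Toy objective (response) function.
    return jnp.sum(y ** 2) + 0.1 * jnp.sum(x ** 2)

def piggyback_step(y, ybar, x):
    # Primal update taken from the given fixed-point solver ...
    y_new, vjp_G = jax.vjp(lambda yy: G(yy, x), y)
    # ... and the adjoint update obtained by reverse-mode differentiation of the
    # same residual code: ybar_{k+1} = G_y(y_k, x)^T ybar_k + f_y(y_k, x).
    f_y = jax.grad(f, argnums=0)(y, x)
    ybar_new = vjp_G(ybar)[0] + f_y
    return y_new, ybar_new

def lagrangian(y, ybar, x):
    # L(y, ybar, x) = f(y, x) + ybar^T (G(y, x) - y); at a primal-dual fixed
    # point this equals the reduced objective value.
    return f(y, x) + ybar @ (G(y, x) - y)

x = jnp.array([1.0, -2.0])
y, ybar = jnp.zeros(2), jnp.zeros(2)
for k in range(30):
    y, ybar = piggyback_step(y, ybar, x)
    if k % 5 == 0:
        print(k, float(lagrangian(y, ybar, x)))
```

In this toy setting the printed Lagrangian value settles down to the reduced objective value noticeably faster than the primal and dual iterates themselves converge, which is the kind of superconvergence effect the abstract refers to.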