A smooth penalty function method that formulates constrained optimization conditions without introducing dual or slack implicit variables is proposed. New active and loss functions are devised to handle the violation of each constraint function independently and adaptively. A single unconstrained minimization of the proposed penalty function produces a solution to the original optimization problem. Constrained optimization conditions that depend only on the primal variables are derived; they reduce to the canonical Karush-Kuhn-Tucker (KKT) conditions within a much more general framework. The derivative of a proposed loss function with respect to its constraint function can be interpreted as the Lagrange multiplier of the traditional Lagrangian function. The appearance of first-order Hessian information is unveiled, in contrast to existing works where the classical Lagrangian Hessian contains only second-order derivatives. Finally, numerical examples, including medium-scale stress-constrained topology optimization and scenario-based reliability design problems, are presented to demonstrate the effectiveness of the proposed methodology.
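The abstract does not specify the paper's active and loss functions, so the following is only a generic sketch of the "single unconstrained minimization" idea using a classical smooth quadratic exterior penalty; the test problem, penalty weight `rho`, and all function names are illustrative assumptions, not the proposed method:

```python
# Generic smooth quadratic exterior-penalty sketch (an assumption for
# illustration; NOT the paper's active/loss construction). It shows one
# unconstrained minimization approximating a constrained optimum.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # objective: minimize (x - 2)^2
    return (x[0] - 2.0) ** 2

def g(x):
    # inequality constraint g(x) <= 0, here x <= 1
    return x[0] - 1.0

def penalized(x, rho=1e3):
    # smooth (C^1) quadratic loss of the constraint violation, max(0, g)^2
    viol = max(0.0, g(x))
    return f(x) + rho * viol ** 2

# one unconstrained minimization of the penalized objective
res = minimize(penalized, x0=np.array([0.0]))
print(res.x[0])  # close to the constrained optimum x* = 1
```

With a large penalty weight the unconstrained minimizer sits just outside the feasible set (here near x ≈ 1.001); the paper's contribution, per the abstract, is a more refined per-constraint loss whose derivative recovers the Lagrange multiplier.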