Surrogate-based constrained optimization of problems with computationally expensive objective functions and constraints remains a major challenge in the optimization field. Its difficulties are of two primary types: how to handle the constraints, especially equality constraints, and how to sample a good point to improve the surrogates' predictions in the feasible region. Overcoming these difficulties requires a reliable constraint-handling method and an efficient infill-sampling strategy. To perform inequality- and equality-constrained optimization of expensive black-box systems, this work proposes a hybrid surrogate-based constrained optimization method (HSBCO). Its main innovation is a new constraint-handling method that maps the feasible region onto the origin of a Euclidean subspace, so that the larger the constraint violation of an infeasible solution, the farther that solution lies from the origin. All constraints of the problem can thus be transformed into a single equivalent equality constraint, with the distance between an infeasible point and the origin in the Euclidean subspace representing the constraint violation of that solution. Based on this distance, the objective function is penalized by a Gaussian penalty function, turning the original constrained optimization problem into an unconstrained one in which every feasible solution of the original minimization problem has a lower penalized objective value than any infeasible solution. To improve optimization performance, kriging-based efficient global optimization (EGO) is used to find a locally optimal solution in the first phase of HSBCO; starting from this solution, RBF-model-based global and local search strategies are then introduced to seek the global optimum.
Such a hybrid optimization strategy can help the optimization process converge to the global optimal solution within a given maximum number of function evaluations, as demonstrated by experimental results on 23 test problems, on which the method achieves the global optimum more closely and efficiently than other leading methods.
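The constraint-handling idea described above (aggregating all constraints into a single distance-to-origin measure and applying a Gaussian-style penalty so that feasible points always dominate infeasible ones) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the specific penalty shape, the `f_upper` bound on feasible objective values, and the `scale` parameter are hypothetical choices that merely preserve the ordering property the abstract claims.

```python
import numpy as np

def violation(x, ineq_cons, eq_cons):
    """Distance from x's constraint image to the origin of the Euclidean subspace.

    Inequality constraints g_i(x) <= 0 contribute max(0, g_i(x));
    equality constraints h_j(x) = 0 contribute their residuals h_j(x).
    A feasible point maps exactly to the origin (violation 0).
    """
    g = np.array([max(0.0, g_i(x)) for g_i in ineq_cons])
    h = np.array([h_j(x) for h_j in eq_cons])
    return float(np.sqrt(np.sum(g**2) + np.sum(h**2)))

def penalized_objective(x, f, ineq_cons, eq_cons, f_upper, scale=1.0):
    """Gaussian-style penalty (illustrative form, not the paper's formula).

    Feasible points keep their true objective value. Infeasible points are
    lifted strictly above f_upper (an assumed upper bound on feasible
    objective values) by a term that grows with the violation distance but
    saturates via a Gaussian factor, keeping the penalized surface bounded.
    """
    v = violation(x, ineq_cons, eq_cons)
    if v == 0.0:
        return f(x)
    # 1 - exp(-v^2/scale) rises from 0 toward 1 as the violation grows,
    # so every infeasible point scores above every feasible one.
    return f_upper + 1.0 + (1.0 - np.exp(-v**2 / scale))
```

With this ordering in place, an unconstrained surrogate (kriging in the EGO phase, RBF models afterwards) can be fit directly to the penalized objective, since minimizing it can never prefer an infeasible point over a feasible one.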