Abstract

Engineering optimization problems often involve computationally expensive black-box simulations of underlying physical phenomena. This paper compares the performance of four constrained optimization algorithms that rely on a Gaussian process model and an infill sampling criterion within the framework of Bayesian optimization. The four infill sampling criteria are expected feasible improvement (EFI), constrained expected improvement (CEI), stepwise uncertainty reduction (SUR), and the augmented Lagrangian (AL). Numerical tests were performed on a benchmark set of nine constrained optimization problems with features commonly found in engineering, as well as on a constrained structural engineering design optimization problem. Based on several measures, including statistical analysis, our results suggest that, overall, the EFI and CEI algorithms are significantly more efficient and robust than the other two methods: they provide the greatest improvement within a very limited number of objective and constraint function evaluations, and they locate a feasible solution in the largest number of trials.
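As background on how such criteria are typically formed, EFI- and CEI-type acquisitions are commonly written as the expected improvement of the objective GP multiplied by the probability that each constraint GP predicts a feasible value. The Python sketch below illustrates this form for minimization with constraints $g_j(\mathbf{x}) \le 0$; the function name, array layout, and the independence assumption across the constraint models are illustrative choices, not the implementation used in the paper.

```python
import numpy as np
from scipy.stats import norm

def expected_feasible_improvement(mu_f, sigma_f, mu_g, sigma_g, f_best):
    """EI-times-probability-of-feasibility acquisition (minimization).

    mu_f, sigma_f : (n,) posterior mean / std of the objective GP at candidates
    mu_g, sigma_g : (n, m) posterior mean / std of the m constraint GPs,
                    with feasibility defined as g_j(x) <= 0
    f_best        : best feasible objective value observed so far
    """
    sigma_f = np.maximum(sigma_f, 1e-12)   # avoid division by zero
    z = (f_best - mu_f) / sigma_f
    ei = (f_best - mu_f) * norm.cdf(z) + sigma_f * norm.pdf(z)

    # Probability that each constraint GP predicts g_j(x) <= 0; the constraint
    # models are treated as independent, so the probabilities multiply.
    prob_feasible = norm.cdf(-mu_g / np.maximum(sigma_g, 1e-12)).prod(axis=1)

    return ei * prob_feasible
```

At each iteration of the Bayesian optimization loop, the next expensive simulation is run at the candidate point that maximizes this acquisition.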

Highlights

  • The performance of constrained Bayesian optimization will depend on the locations of the points in the initial sample

  • Except for problems G02 and G06, constrained expected improvement (CEI) was overall the slowest at finding a first feasible solution

Introduction

Nature-inspired optimization algorithms, such as swarm intelligence metaheuristics and evolutionary algorithms, have become increasingly popular in recent years for solving optimization problems in different domains; examples include the firefly algorithm [1,2], the crow search algorithm [3], a hybrid gray wolf optimizer–crow search algorithm [4], and elephant herding optimization and the tree growth algorithm [5].

This paper instead focuses on constrained Bayesian optimization, which relies on a Gaussian process (GP) surrogate model of the expensive objective and constraint functions. In a GP model, the prior mean function $\mu_0(\mathbf{x})$ reflects the expected function value at an input $\mathbf{x}$, and the covariance (kernel) function $k(\mathbf{x}, \mathbf{x}')$ models the dependency between the function values at two different input points $\mathbf{x}$ and $\mathbf{x}'$. Once the prior mean and kernel functions are chosen, we can condition on the function values observed at the sample points $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ and obtain the posterior distribution of the function value at any new input $\mathbf{x}$ in the domain $D$, given all previous observations. A prediction at a new input $\mathbf{x}$ is then made by drawing $f(\mathbf{x})$ from this posterior distribution of the GP. For noise-free observations and a constant mean, the predictive distribution of $f(\mathbf{x})$ at a point $\mathbf{x} \in D$ becomes a Gaussian with closed-form mean and variance [43], as sketched below.
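For reference, the standard noise-free, constant-mean GP predictive distribution takes the form below, where $\mathbf{K}_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$, $\mathbf{k}(\mathbf{x}) = [k(\mathbf{x}, \mathbf{x}_1), \ldots, k(\mathbf{x}, \mathbf{x}_n)]^{\top}$, and $\mathbf{y}$ collects the observed function values; these symbols are introduced here for the reconstruction and may differ from the paper's own notation:

$$
f(\mathbf{x}) \mid \mathbf{y} \sim \mathcal{N}\big(\mu(\mathbf{x}),\, \sigma^{2}(\mathbf{x})\big), \qquad
\mu(\mathbf{x}) = \mu_{0} + \mathbf{k}(\mathbf{x})^{\top}\mathbf{K}^{-1}(\mathbf{y} - \mu_{0}\mathbf{1}), \qquad
\sigma^{2}(\mathbf{x}) = k(\mathbf{x}, \mathbf{x}) - \mathbf{k}(\mathbf{x})^{\top}\mathbf{K}^{-1}\mathbf{k}(\mathbf{x}).
$$

Under this form the posterior mean interpolates the noise-free observations and the posterior variance vanishes at the sampled points.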
