Abstract

This work discusses a novel approach, Bayesian-inspired multifidelity optimization, for solving complex optimization problems that require computationally expensive simulations. Bayesian-inspired multifidelity optimization aims to minimize the computational cost of achieving accurate high-fidelity results through the use of low-fidelity (computationally cheaper) models in combination with a surrogate correction model. A novel Bayesian hybrid bridge function was developed to serve as the low-fidelity correction technique. This Bayesian hybrid bridge function is a Bayesian weighted average of two standard bridge functions, additive and multiplicative. The correction technique is implemented in parallel with a modified trust region model management optimization scheme. It is shown that optimization on the corrected low-fidelity model converges to the same local optimum as optimization on the high-fidelity model. The technique is developed to reach that optimum in fewer high-fidelity function evaluations than traditional optimization, ultimately reducing computational cost. This work also extends low-fidelity correction optimization beyond traditional bifidelity optimization to problems with multiple-fidelity objective and constraint functions. The proposed solution technique allows for the use of commercial optimizers and is demonstrated on three separate problems. The first demonstration is a one-dimensional analytical test case in which the Bayesian correction is illustrated. Then, the ability to handle multiple fidelities in both the objective and constraint functions is presented and discussed using a two-dimensional analytical test case. An airfoil shape optimization problem is then used to illustrate the effectiveness of this approach in an engineering design scenario in which the various fidelities arise from differences in the computational physics. It is shown through these demonstrations that this Bayesian low-fidelity correction optimization approach converges to a high-fidelity optimum at a reduced computational cost in comparison to traditional optimization techniques.
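
As a rough illustration only (a minimal sketch, not the paper's implementation: the zeroth-order bridge functions, the function names, and the fixed Bayesian weight shown here are assumptions), the corrected low-fidelity model can be formed as a weighted blend of an additively corrected and a multiplicatively corrected low-fidelity prediction, with the bridge terms anchored at the current trust-region center:

    def corrected_model(x, f_hi, f_lo, x_center, w):
        """Hybrid bridge correction of a low-fidelity model (illustrative sketch).

        f_hi, f_lo : callables returning high- and low-fidelity objective values
        x_center   : current trust-region center, where one high-fidelity
                     evaluation is assumed to be available
        w          : Bayesian weight in [0, 1] blending the two corrections
        """
        add_bridge = f_hi(x_center) - f_lo(x_center)   # additive bridge: difference at center
        mul_bridge = f_hi(x_center) / f_lo(x_center)   # multiplicative bridge: ratio at center
        f_add = f_lo(x) + add_bridge                   # additively corrected low-fidelity model
        f_mul = f_lo(x) * mul_bridge                   # multiplicatively corrected low-fidelity model
        return w * f_add + (1.0 - w) * f_mul           # Bayesian weighted average of the two

In a trust-region model management loop, optimization would be performed on this corrected model within the current trust region, with the weight and the bridge terms updated from new high-fidelity evaluations at each accepted iterate; the specific Bayesian update of w is described in the full text rather than here.
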
