Abstract
Efficient global optimization (EGO) is one of the most popular sequential adaptive design (SAD) methods for expensive black-box optimization problems. A well-recognized weakness of the original EGO in complex computer experiments is that it is serial, so modern parallel computing techniques cannot be used to speed up the simulator runs. Existing multi-point EGO methods, in turn, suffer from heavy computation and point clustering. In this work, a novel batch SAD method, named "Accelerated EGO", is proposed; it uses a refined sampling/importance resampling (SIR) method to search for points with large expected improvement (EI) values. The computational burden of the new method is much lighter, and point clustering is avoided. The efficiency of the proposed batch SAD method is validated on nine classic test functions with dimensions ranging from 2 to 12. The empirical results show that the proposed algorithm indeed parallelizes the original EGO and achieves substantial improvement over the other parallel EGO algorithm, especially in high-dimensional cases. Additionally, the new method is applied to hyper-parameter tuning of support vector machine (SVM) and XGBoost models in machine learning. Accelerated EGO obtains cross-validation accuracy comparable to that of the other methods, while the CPU time is greatly reduced thanks to the parallel computation and the sampling method.
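To make the idea of EI-weighted batch selection concrete, below is a minimal Python sketch of sampling/importance resampling applied to expected improvement. It is an illustration under assumed choices (a scikit-learn Gaussian process surrogate, a uniform candidate pool of size `n_candidates`, batch size `q`, and resampling without replacement to discourage clustering), not the authors' exact Accelerated EGO procedure.

```python
# Illustrative sketch: EI-weighted SIR batch selection (not the paper's exact algorithm).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X, y_best):
    """Standard EI of candidate points X under a fitted GP surrogate (minimization)."""
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive std
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def sir_batch(gp, bounds, y_best, q=4, n_candidates=2000, rng=None):
    """Draw a candidate pool, weight by EI, and resample q points without
    replacement so the batch does not collapse onto a single high-EI cluster."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds[:, 0], bounds[:, 1]
    cand = rng.uniform(lo, hi, size=(n_candidates, bounds.shape[0]))
    ei = expected_improvement(gp, cand, y_best)
    w = np.maximum(ei, 0.0)
    if w.sum() == 0.0:                        # fall back to uniform weights
        w = np.ones_like(w)
    w = w / w.sum()
    idx = rng.choice(n_candidates, size=q, replace=False, p=w)
    return cand[idx]

# Usage: fit the GP on evaluated designs, then pick a batch of q new points
# that can be evaluated in parallel on the simulator.
X_obs = np.random.default_rng(0).uniform(-5, 5, size=(10, 2))
y_obs = np.sum(X_obs**2, axis=1)              # toy objective (sphere function)
gp = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_obs)
batch = sir_batch(gp, bounds=np.array([[-5.0, 5.0], [-5.0, 5.0]]),
                  y_best=y_obs.min(), q=4, rng=1)
print(batch)
```

Resampling without replacement from the EI-based weights is one simple way to spread the batch over distinct high-EI regions; the paper's refined SIR scheme may differ in how the weights and the candidate pool are constructed.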