Abstract

Hyperparameter optimization (HPO) is a computationally expensive blackbox optimization problem that maximizes the performance of a machine learning model by tuning its hyperparameters. Conventionally, global search rather than local search has been widely adopted to address HPO. In this study, we investigate whether this conventional choice is reasonable by empirically comparing popular global and local search methods on HPO problems. The numerical results demonstrate that local search methods consistently achieve results comparable to or better than those of global search methods, i.e., local search is a more reasonable choice for HPO. We also report findings from detailed analyses of the experimental data, conducted to understand how each method functions and what the objective landscapes of HPO look like.
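To make the setting concrete, the following minimal sketch (not the paper's code) frames HPO as blackbox minimization of a validation error over two hyperparameters and contrasts a global random search with a simple local search; the objective function, bounds, and all names are hypothetical placeholders chosen only for illustration.

```python
# Minimal sketch, assuming a hypothetical validation-error surrogate:
# HPO as blackbox minimization, comparing global random search with local search.
import random

def validation_error(log_lr, log_reg):
    # Hypothetical smooth stand-in for a model's validation error;
    # its minimum sits near log_lr = -2.0, log_reg = -4.0.
    return (log_lr + 2.0) ** 2 + 0.5 * (log_reg + 4.0) ** 2

BOUNDS = {"log_lr": (-6.0, 0.0), "log_reg": (-8.0, 0.0)}

def random_search(budget, rng):
    # Global search: sample configurations uniformly over the whole space.
    best = None
    for _ in range(budget):
        x = {k: rng.uniform(*b) for k, b in BOUNDS.items()}
        err = validation_error(x["log_lr"], x["log_reg"])
        if best is None or err < best[0]:
            best = (err, x)
    return best

def local_search(budget, rng, step=0.5):
    # Local search: perturb the incumbent configuration and keep improvements.
    x = {k: rng.uniform(*b) for k, b in BOUNDS.items()}
    best_err = validation_error(x["log_lr"], x["log_reg"])
    for _ in range(budget - 1):
        cand = {k: min(max(v + rng.gauss(0.0, step), BOUNDS[k][0]), BOUNDS[k][1])
                for k, v in x.items()}
        err = validation_error(cand["log_lr"], cand["log_reg"])
        if err < best_err:
            best_err, x = err, cand
    return best_err, x

if __name__ == "__main__":
    print("random search:", random_search(100, random.Random(0)))
    print("local search: ", local_search(100, random.Random(0)))
```

With the same evaluation budget, both searches query only the blackbox objective; the sketch simply illustrates the two search strategies being compared, not the paper's experimental protocol or benchmark suite.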
