Abstract

An important aspect of machine learning (ML) is controlling the learning process of the ML method in question so as to maximize its performance. Hyperparameter tuning (HPT) involves selecting suitable values for the parameters of an ML method that govern its learning process. Because HPT can be cast as a black-box optimization problem subject to stochasticity, simulation optimization (SO) methods appear well suited to this purpose. We therefore conceptualize HPT as a discrete SO problem and demonstrate the use of the Kim and Nelson (KN) ranking and selection method and of the stochastic ruler (SR) and adaptive hyperbox (AH) random search methods for HPT. We also construct the theoretical basis for applying the KN method. We demonstrate the application of the KN and SR methods to a wide variety of machine learning models, including deep neural network models, and then successfully benchmark the KN, SR, and AH methods against multiple state-of-the-art HPT methods.
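The following is a minimal illustrative sketch, not code from the paper, of how HPT over a small discrete hyperparameter grid might be framed as a discrete SO problem and searched with a stochastic-ruler-style acceptance rule. The grid, the toy noisy objective, and all names (e.g., `noisy_loss`, the ruler bounds `a` and `b`) are assumptions made for illustration only.

```python
import random

# Hypothetical discrete hyperparameter grid: each point is one configuration.
# The parameter names and ranges are illustrative, not taken from the paper.
GRID = [
    {"hidden_units": h, "learning_rate": lr}
    for h in (16, 32, 64, 128)
    for lr in (1e-3, 1e-2, 1e-1)
]

def noisy_loss(config):
    """Stand-in for a stochastic objective, e.g. the validation loss of a model
    trained with `config` on a random train/validation split."""
    base = (config["hidden_units"] - 64) ** 2 / 1e4 + abs(config["learning_rate"] - 1e-2)
    return base + random.gauss(0.0, 0.05)  # observation noise

def stochastic_ruler(grid, a, b, iters=200, mk=lambda k: 1 + k // 20):
    """Stochastic-ruler-style search (after Yan and Mukai), minimization version:
    a candidate is accepted only if every one of m_k noisy evaluations beats an
    independent Uniform(a, b) 'ruler' draw; otherwise the search stays put."""
    current = random.choice(grid)
    for k in range(iters):
        candidate = random.choice(grid)  # neighborhood taken as the whole grid here
        accept = all(noisy_loss(candidate) < random.uniform(a, b)
                     for _ in range(mk(k)))
        if accept:
            current = candidate
    return current

if __name__ == "__main__":
    best = stochastic_ruler(GRID, a=0.0, b=1.0)
    print("selected configuration:", best)
```

In practice, `noisy_loss` would wrap a full train-and-validate run of the ML model under the given configuration, and the growing number of tests `m_k` trades off exploration against confidence in later iterations.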
