Abstract

Hyperparameter Optimization (HPO) of Deep Learning (DL)-based models tends to be a compute-intensive process, as it usually requires training the target model with many different hyperparameter configurations. We show that integrating model performance prediction with early stopping methods holds great potential to speed up the HPO process of deep learning models. Moreover, we propose a novel algorithm called Swift-Hyperband that can use either classical or quantum Support Vector Regression (SVR) for performance prediction and can benefit from distributed High Performance Computing (HPC) environments. This algorithm is tested not only on the Machine-Learned Particle Flow (MLPF) model used in High-Energy Physics (HEP), but also on a wider range of target models from domains such as computer vision and natural language processing. Swift-Hyperband is shown to find comparable (or better) hyperparameters while using fewer computational resources in all test cases.
