Abstract

Training and evaluating many competing Artificial Intelligence (AI)/Machine Learning (ML) models can be very time-consuming and expensive. Furthermore, the costs associated with this hyperparameter optimization task are multiplied when cross validation is used during the model selection process, since every candidate model must be trained and evaluated once per fold. Quickly identifying high-performing models when conducting hyperparameter optimization with cross validation is hence an important problem in AI/ML research. Among the proposed methods of accelerating hyperparameter optimization, successive halving has emerged as a popular, state-of-the-art early stopping algorithm. Concurrently, recent work on cross validation has yielded a greedy cross validation algorithm that prioritizes the most promising candidate AI/ML models during the early stages of the model selection process. The current paper proposes a greedy successive halving algorithm in which greedy cross validation is integrated into successive halving. An extensive series of experiments is then conducted to evaluate the comparative performance of the proposed greedy successive halving algorithm. The results show that the quality of the AI/ML models selected by the greedy successive halving algorithm is statistically indistinguishable from that of models selected by standard successive halving, but that greedy successive halving is typically more than 3.5 times faster than standard successive halving.
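To make the combination concrete, below is a minimal Python sketch of how greedy cross validation might be embedded inside a successive halving loop. It is an illustration under assumptions rather than the paper's exact procedure: the `evaluate_fold` callable, the per-round fold budget (here, half of a full k-fold pass), and the elimination fraction `eta` are hypothetical placeholders, not details taken from the paper.

```python
import heapq

def greedy_successive_halving(candidates, evaluate_fold, n_folds=10, eta=2):
    """Sketch: successive halving with greedy cross validation.

    candidates    -- list of hyperparameter configurations
    evaluate_fold -- callable(config, fold_index) -> validation score
                     (higher is better); a hypothetical placeholder
    n_folds       -- number of cross validation folds
    eta           -- elimination factor: keep the top 1/eta each round
    """
    survivors = list(candidates)
    while len(survivors) > 1:
        # Seed each survivor with one fold so every candidate has a score.
        scores = {i: [evaluate_fold(cfg, 0)] for i, cfg in enumerate(survivors)}
        # Max-heap (via negated scores) keyed on the running mean score.
        heap = [(-s[0], i) for i, s in scores.items()]
        heapq.heapify(heap)
        # Assumed per-round budget: half the cost of full k-fold CV.
        total_budget = (len(survivors) * n_folds) // 2
        spent = len(survivors)
        # Greedy CV: repeatedly give the next fold to the candidate that
        # currently looks best, so promising configurations accumulate
        # fold evaluations first.
        while spent < total_budget and heap:
            _, i = heapq.heappop(heap)
            folds_done = len(scores[i])
            if folds_done >= n_folds:
                continue  # fully cross-validated; drop from the queue
            scores[i].append(evaluate_fold(survivors[i], folds_done))
            spent += 1
            mean = sum(scores[i]) / len(scores[i])
            heapq.heappush(heap, (-mean, i))
        # Successive halving step: keep the top 1/eta of candidates by
        # mean score over however many folds each one received.
        ranked = sorted(range(len(survivors)),
                        key=lambda i: sum(scores[i]) / len(scores[i]),
                        reverse=True)
        survivors = [survivors[i] for i in ranked[:max(1, len(survivors) // eta)]]
    return survivors[0]
```

The max-heap keyed on the running mean validation score is what makes the fold allocation greedy: at every step the next fold goes to whichever surviving candidate currently looks best, so strong configurations accumulate reliable cross validation estimates early while weak ones receive little compute before being eliminated at the end of the round.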
