Abstract

Machine learning (ML) algorithms are techniques that allow computers to learn from data without being explicitly programmed. ML techniques have hyperparameters that typically influence prediction accuracy and hence require tuning. In this study, we systematically evaluate the performance of the genetic algorithm (GA) in tuning ML hyperparameters against three other common tuning techniques: grid search (GS), random search (RS), and Bayesian optimization (BO). While previous studies have explored the potential of metaheuristic techniques such as GA for tuning ML models, a systematic comparison with these other commonly used techniques has been lacking. Results indicate that GA slightly outperformed the other methods in optimality, owing to its ability to pick any continuous value within the search range. However, apart from GS, which took the longest, GA was relatively time-inefficient compared to RS and BO, which found solutions close to GA's within a shorter time (GA: 149 minutes, RS: 88 minutes, BO: 105 minutes, GS: 756 minutes).
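The distinction the abstract draws — a fixed grid versus methods that can sample any continuous value in the range — can be sketched with a minimal, self-contained example. The loss function, hyperparameter names, ranges, and all GA settings below are hypothetical stand-ins for a real model's validation error, chosen only to illustrate the three search strategies:

```python
import random

# Hypothetical stand-in for a model's validation loss over two
# hyperparameters (e.g. a learning rate and a regularization strength).
def loss(lr, reg):
    return (lr - 0.037) ** 2 + (reg - 0.61) ** 2

def grid_search(grid_lr, grid_reg):
    # GS: exhaustively evaluate every combination on a fixed discrete grid.
    return min((loss(lr, reg), lr, reg) for lr in grid_lr for reg in grid_reg)

def random_search(n, rng):
    # RS: sample continuous values uniformly from the allowed ranges.
    best = (float("inf"), None, None)
    for _ in range(n):
        lr, reg = rng.uniform(0, 0.1), rng.uniform(0, 1)
        best = min(best, (loss(lr, reg), lr, reg))
    return best

def genetic_search(pop_size, generations, rng):
    # Minimal GA sketch: truncation selection, blend crossover, and
    # Gaussian mutation over continuous hyperparameter values.
    pop = [(rng.uniform(0, 0.1), rng.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=lambda p: loss(*p))[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            lr = (a[0] + b[0]) / 2 + rng.gauss(0, 0.005)
            reg = (a[1] + b[1]) / 2 + rng.gauss(0, 0.05)
            children.append((min(max(lr, 0), 0.1), min(max(reg, 0), 1)))
        pop = parents + children
    best = min(pop, key=lambda p: loss(*p))
    return (loss(*best), *best)

rng = random.Random(0)
gs = grid_search([0.01, 0.05, 0.1], [0.25, 0.5, 0.75])
rs = random_search(200, rng)
ga = genetic_search(20, 30, rng)
print(gs, rs, ga)
```

Because GS is restricted to the nine grid points while RS and the GA draw from the continuous ranges, the latter two can land arbitrarily close to the true minimum, mirroring the abstract's observation about GA's optimality advantage over a fixed grid.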
