Abstract

Hyperparameters are the foundation of how machine learning algorithms learn, and having near-optimal hyperparameter values is essential for any learning algorithm. However, tuning hyperparameters can be difficult because it requires rules of thumb for deciding which hyperparameters to adjust and how. This paper compares hyperparameter tuning approaches across various machine learning algorithms. The simulations deploy six machine learning algorithms: Decision Tree, Gaussian Naive Bayes, Random Forest, LightGBM, CatBoost, and XGBoost. For each algorithm, six hyperparameter tuning methods (random search, grid search, Bayesian optimization, genetic algorithm, SHERPA, and Optuna) are applied to evaluate their efficiency. For each test, the same value ranges are specified across all algorithms; the advantage of this setup is that the time taken to find the best hyperparameter values is more controlled. The objective of this research is to determine whether hyperparameter tuning is necessary and, if so, which tuning method is best for each algorithm. The result of this research is that Random Forest without the aid of hyperparameter tuning achieved the best performance. Future research can forgo evaluating all hyperparameter tuning methods simultaneously after identifying the most efficient method for each of the six machine learning algorithms.
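To illustrate the experimental setup the abstract describes (several tuning methods searching the same fixed value ranges), the following is a minimal sketch in pure Python. The objective function, parameter names, and ranges are assumptions for illustration only; the actual study trains real models (e.g. Random Forest) and scores them on held-out data.

```python
import itertools
import random

# Hypothetical validation score as a function of two hyperparameters.
# In the real study this would train a model and evaluate it; the
# quadratic form here is an assumption chosen so the sketch runs fast.
def validation_score(max_depth, n_estimators):
    return -((max_depth - 6) ** 2) - ((n_estimators - 120) ** 2) / 100

# The same fixed value ranges are shared by every tuning method,
# mirroring the paper's setup of identical ranges per algorithm.
search_space = {
    "max_depth": range(2, 11),           # 2, 3, ..., 10
    "n_estimators": range(50, 201, 10),  # 50, 60, ..., 200
}

def grid_search(space, score_fn):
    """Exhaustively evaluate every combination in the space."""
    best = None
    for combo in itertools.product(*space.values()):
        params = dict(zip(space.keys(), combo))
        score = score_fn(**params)
        if best is None or score > best[1]:
            best = (params, score)
    return best

def random_search(space, score_fn, n_trials=30, seed=0):
    """Evaluate a fixed budget of randomly sampled combinations."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        params = {k: rng.choice(list(v)) for k, v in space.items()}
        score = score_fn(**params)
        if best is None or score > best[1]:
            best = (params, score)
    return best

grid_best = grid_search(search_space, validation_score)
rand_best = random_search(search_space, validation_score)
print("grid search best:", grid_best)
print("random search best:", rand_best)
```

Because both methods draw from the same ranges, the comparison isolates search strategy from search space, which is what makes the time cost "more controlled": grid search pays for every combination, while random search spends only a fixed trial budget.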
