Abstract

To ensure the learning accuracy of machine learning algorithms and address the problem of hyper-parameter tuning during model training, an XGBoost parameter tuning strategy based on grid search and k-fold cross validation is proposed, which improves learning efficiency compared with traditional grid search. Building on a Bayesian network probability model, a Bayesian Optimization XGBoost model is then constructed, which greatly reduces the time required to train the model and improves classification accuracy compared with the traditional XGBoost model. In the simulation experiments, the training results of XGBoost models tuned with the two methods are compared: XGBoost tuned with Bayesian Optimization outperforms XGBoost tuned with grid search and k-fold cross validation in both training accuracy and efficiency. By comparing the training results of the different models, the optimal model is obtained, and its feature importance is analyzed.
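
The following Python code is a minimal sketch of the two tuning strategies compared above, not the paper's implementation: the dataset, the search grid, the parameter bounds, and the choice of the bayes_opt library are illustrative assumptions.

```python
# Sketch of the two XGBoost tuning strategies compared in the abstract.
# The dataset, grid, and bounds below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from xgboost import XGBClassifier
from bayes_opt import BayesianOptimization

X, y = load_breast_cancer(return_X_y=True)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # k-fold CV with k = 5

# 1) Grid search + k-fold cross validation: every parameter combination is evaluated.
grid = {
    "max_depth": [3, 5, 7],
    "learning_rate": [0.05, 0.1, 0.3],
    "n_estimators": [100, 300],
}
grid_search = GridSearchCV(XGBClassifier(), param_grid=grid, scoring="accuracy", cv=cv)
grid_search.fit(X, y)
print("grid search best params:", grid_search.best_params_)
print("grid search best CV accuracy:", grid_search.best_score_)

# 2) Bayesian Optimization over the same hyper-parameters: a probabilistic
#    surrogate chooses which configurations to evaluate next.
def xgb_cv_accuracy(max_depth, learning_rate, n_estimators):
    model = XGBClassifier(
        max_depth=int(max_depth),
        learning_rate=learning_rate,
        n_estimators=int(n_estimators),
    )
    return cross_val_score(model, X, y, scoring="accuracy", cv=cv).mean()

optimizer = BayesianOptimization(
    f=xgb_cv_accuracy,
    pbounds={"max_depth": (3, 8), "learning_rate": (0.01, 0.3), "n_estimators": (100, 500)},
    random_state=0,
)
optimizer.maximize(init_points=5, n_iter=20)  # fixed, small evaluation budget
print("Bayesian Optimization best:", optimizer.max)
```

The efficiency gap the abstract describes shows up in the number of model fits: grid search evaluates every combination in the grid (18 candidates times 5 folds in this sketch), while Bayesian Optimization spends a much smaller, fixed budget of evaluations guided by its probability model.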
