Sparse General Matrix-Matrix Multiplication (SpGEMM) plays an important role in many applications. Many efficient algorithms have been proposed to improve the performance of SpGEMM on GPUs, but their performance varies considerably across matrices with different structures: no single algorithm achieves optimal SpGEMM performance on all matrices. In this article, we design a machine learning based approach for predicting the optimal SpGEMM algorithm for a given input matrix. By extracting features from input matrices, we use LightGBM and XGBoost to train lightweight models that predict the best performing algorithm with high accuracy and low inference overhead. We also investigate the impact of tree depth on model accuracy and inference overhead. Our evaluation shows that increasing the tree depth raises prediction accuracy to a maximum of approximately 85%, at the cost of a higher inference overhead of approximately 10 µs. Compared with state-of-the-art algorithms on three GPU platforms, our method achieves better overall performance.
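
To make the approach concrete, the sketch below illustrates (under our own assumptions, not as the paper's exact pipeline) how cheap structural features of a CSR matrix could feed a LightGBM classifier that selects an SpGEMM algorithm; the feature set, function names, and label encoding are illustrative only.

```python
# Minimal sketch: structural features of a sparse matrix -> LightGBM
# classifier that picks an SpGEMM algorithm. Hypothetical feature set
# and labels; the paper's actual features and training data may differ.
import numpy as np
import scipy.sparse as sp
from lightgbm import LGBMClassifier

def matrix_features(A: sp.csr_matrix) -> np.ndarray:
    """Cheap structural features of a CSR matrix (illustrative set)."""
    row_nnz = np.diff(A.indptr)                 # nonzeros per row
    return np.array([
        A.shape[0], A.shape[1], A.nnz,          # dimensions and total nonzeros
        A.nnz / (A.shape[0] * A.shape[1]),      # density
        row_nnz.mean(), row_nnz.std(), row_nnz.max(),
    ])

def train_selector(X: np.ndarray, y: np.ndarray, depth: int = 6) -> LGBMClassifier:
    """Train a selector on feature matrix X and labels y, where y[i] is the
    index of the fastest SpGEMM algorithm measured offline for matrix i.
    The tree depth is exposed to study the accuracy/overhead trade-off."""
    model = LGBMClassifier(max_depth=depth, n_estimators=100)
    model.fit(X, y)
    return model
```

At run time, the trained model would be queried once per input matrix (e.g., `model.predict(matrix_features(A).reshape(1, -1))`), so the per-call inference cost is what must stay small relative to the SpGEMM kernel itself.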