In text classification, current research largely overlooks the role of part-of-speech features, as well as multi-channel models, which can learn richer text information than a single model. Moreover, the common approach of producing the final classification with a fully connected layer followed by a Softmax layer leaves room for improvement. This paper proposes a hybrid text classification model based on part-of-speech features, named PAGNN-Stacking. In the text representation stage, introducing part-of-speech features yields a more accurate representation of the text. In the feature extraction stage, a multi-channel attention-gated neural network fully learns the text information. In the final classification stage, we innovatively adopt the Stacking algorithm in place of the fully connected and Softmax layers alone: five machine learning algorithms are fused as base classifiers, and a fully connected layer with Softmax serves as the meta classifier. Experiments on the IMDB, SST-2, and AG_News datasets show that PAGNN-Stacking achieves significantly higher accuracy than the benchmark models.
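To make the Stacking stage concrete, the following is a minimal sketch of the idea described above: several base classifiers each emit class probabilities, which are concatenated and passed to a meta classifier consisting of a linear (fully connected) layer and a Softmax. The stand-in base models and toy weights here are illustrative assumptions, not the paper's five machine learning algorithms or its trained parameters.

```python
import math

def softmax(z):
    # Numerically stable softmax over a list of logits.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Hypothetical stand-in base classifiers (the paper fuses five
# machine learning algorithms; here we use three simple rules).
# Each maps a feature vector to per-class scores for 2 classes.
base_classifiers = [
    lambda x: [x[0], 1 - x[0]],
    lambda x: [x[1], 1 - x[1]],
    lambda x: [(x[0] + x[1]) / 2, 1 - (x[0] + x[1]) / 2],
]

def stack_features(x):
    # Concatenate every base classifier's probability output.
    feats = []
    for clf in base_classifiers:
        feats.extend(softmax(clf(x)))
    return feats

def meta_classify(x, weights):
    # Meta classifier: one fully connected layer + Softmax,
    # mirroring the meta classifier described in the abstract.
    feats = stack_features(x)
    logits = [sum(w * f for w, f in zip(row, feats)) for row in weights]
    return softmax(logits)

# Toy weights: 2 output classes over 6 stacked features.
W = [[1.0] * 6, [0.5] * 6]
probs = meta_classify([0.9, 0.8], W)
print(probs)  # a valid probability distribution over 2 classes
```

In practice, the base classifiers are trained on the extracted features and the meta classifier is trained on their held-out predictions, so the ensemble learns how much to trust each base model.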