Abstract

Text classification is an essential and classical problem in natural language processing. Traditional text classifiers often rely on many human-designed features. With the rise of deep learning, Recurrent Neural Networks and Convolutional Neural Networks have been widely applied to text classification. Meanwhile, the success of Graph Neural Networks (GNNs) on structured data has prompted many researchers to apply GNNs to traditional NLP tasks. However, these GNN-based methods commonly ignore the word-order information of the sentence. In this work, we propose a model that uses a recurrent structure to capture as much contextual information as possible when learning word representations, thereby preserving word-order information that GNN-based networks discard. We then borrow the message-passing idea from GNNs to aggregate this contextual information and update the hidden representation of each word. Analogous to the readout operation of GNNs, we employ a max-pooling layer that automatically identifies which words play key roles in the classification, capturing the critical components of the text. We conduct experiments on four widely used datasets, and the results show that our model achieves significant improvements over RNN-based and GNN-based models.
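The abstract outlines a three-stage architecture: an order-aware recurrent encoder, a message-passing step that aggregates neighboring context into each word's hidden state, and a max-pooling readout. The sketch below is a minimal, hypothetical PyTorch rendering of that pipeline, not the authors' implementation; the module names, the sliding-window neighborhood used for message passing, and all layer sizes are assumptions for illustration only.

```python
# Minimal sketch (assumed, not the authors' code) of the described pipeline:
# recurrent contextual encoding -> message-passing update -> max-pooling readout.
import torch
import torch.nn as nn


class RecurrentMessagePassingClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Bi-directional recurrent layer keeps word-order information.
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                           bidirectional=True)
        # Linear maps used by the (assumed) message-passing update.
        self.msg = nn.Linear(2 * hidden_dim, 2 * hidden_dim)
        self.update = nn.Linear(4 * hidden_dim, 2 * hidden_dim)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        h, _ = self.rnn(self.embedding(token_ids))      # (B, T, 2H)
        # Message passing over a simple left/right-neighbor "graph"
        # (sequence-boundary wrap-around is ignored here for brevity).
        left = torch.roll(h, shifts=1, dims=1)
        right = torch.roll(h, shifts=-1, dims=1)
        messages = self.msg(left) + self.msg(right)     # aggregated context
        h = torch.tanh(self.update(torch.cat([h, messages], dim=-1)))
        # Readout: max-pooling keeps the most salient feature per dimension,
        # i.e. the words that contribute most to the final decision.
        doc = h.max(dim=1).values                       # (B, 2H)
        return self.classifier(doc)


if __name__ == "__main__":
    model = RecurrentMessagePassingClassifier(
        vocab_size=10000, embed_dim=128, hidden_dim=64, num_classes=4)
    logits = model(torch.randint(0, 10000, (2, 20)))    # dummy batch
    print(logits.shape)                                  # torch.Size([2, 4])
```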
