Abstract
Text classification is an essential and classical problem in natural language processing. Traditional text classifiers often rely on many human-designed features. With the rise of deep learning, Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) have been widely applied to text classification. Meanwhile, the success of Graph Neural Networks (GNNs) on structured data has attracted many researchers to apply GNNs to traditional NLP tasks. However, these methods commonly ignore the word order of the sentence when using a GNN. In this work, we propose a model that uses a recurrent structure to capture as much contextual information as possible when learning word representations, thereby preserving word order information, unlike GNN-based networks. We then use the idea of GNN message passing to aggregate the contextual information and update each word's hidden representation. Analogous to a GNN's readout operation, we employ a max-pooling layer that automatically identifies which words play key roles in text classification, capturing the critical components of the text. We conduct experiments on four widely used datasets, and the experimental results show that our model achieves significant improvements over RNN-based and GNN-based models.
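Below is a minimal sketch, in PyTorch, of the pipeline the abstract describes: a recurrent encoder that preserves word order, a message-passing step that aggregates neighbouring hidden states to update each word's representation, and a max-pooling readout over words before classification. All names (`ContextGNNClassifier`, `hidden_dim`, the window-based neighbour aggregation, etc.) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ContextGNNClassifier(nn.Module):
    """Illustrative sketch: recurrent context + message passing + max-pool readout."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Recurrent structure: a bidirectional LSTM captures contextual
        # information while preserving word order.
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Update function for message passing: combines a word's own state
        # with the aggregated message from its neighbours.
        self.update = nn.Linear(4 * hidden_dim, 2 * hidden_dim)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)
        h, _ = self.encoder(x)                     # (batch, seq_len, 2*hidden_dim)
        # Message passing (assumed here as a mean over the two adjacent words;
        # torch.roll wraps around at sequence boundaries, which is acceptable
        # for a sketch).
        left = torch.roll(h, shifts=1, dims=1)
        right = torch.roll(h, shifts=-1, dims=1)
        messages = (left + right) / 2
        h = torch.tanh(self.update(torch.cat([h, messages], dim=-1)))
        # Readout: max-pooling over words picks out the most salient components.
        doc, _ = h.max(dim=1)
        return self.classifier(doc)


# Usage example with random data (illustrative shapes only).
model = ContextGNNClassifier(vocab_size=10000, embed_dim=128,
                             hidden_dim=64, num_classes=4)
logits = model(torch.randint(0, 10000, (8, 50)))   # -> (8, 4)
```

The max over the sequence dimension plays the role of the GNN readout: whichever word produces the largest activation in a given feature dimension determines that dimension of the document representation, so the pooled vector reflects the words the model judges most important.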