Abstract

Text classification is one of the most fundamental tasks in natural language processing: it uncovers patterns in documents and assigns them to categories. Graph convolutional networks (GCNs) are now widely used for text classification because of their scalability and efficiency. However, a GCN typically performs best with two layers; as the number of layers increases, problems such as over-smoothing arise. This paper proposes a semi-supervised text classification model based on a graph attention network, which makes full use of how information propagates along the graph and adds residual connections so that the network can be made deeper. At the same time, an attention mechanism assigns neighboring nodes different weights according to their importance to the target node. A dual-level attention mechanism, operating at the topic level and the node level, is proposed to learn the importance of different neighboring nodes, and of additional neighbor information, to the target node. Experimental results show that this model outperforms other methods on four benchmark datasets.
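
The core mechanism the abstract describes, attention-weighted neighbor aggregation combined with a residual connection that lets layers be stacked without over-smoothing, can be sketched roughly as follows. This is an illustrative sketch only, not the paper's implementation: the function name, the single-head formulation, and the use of plain NumPy are all assumptions for clarity.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    """LeakyReLU nonlinearity, as commonly used for attention logits."""
    return np.where(x > 0, x, slope * x)

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_layer_residual(H, A, W, a):
    """One single-head graph attention layer with a residual connection.

    H: (N, F) node feature matrix
    A: (N, N) adjacency matrix with self-loops (nonzero = edge)
    W: (F, F) shared linear transform (square so the residual adds cleanly)
    a: (2F,) attention vector scoring concatenated node pairs
    """
    N = H.shape[0]
    Z = H @ W                          # transform node features
    H_out = np.zeros_like(Z)
    for i in range(N):
        nbrs = np.nonzero(A[i])[0]     # neighbors of target node i
        # attention logit e_ij = LeakyReLU(a^T [z_i || z_j])
        logits = np.array(
            [leaky_relu(a @ np.concatenate([Z[i], Z[j]])) for j in nbrs]
        )
        alpha = softmax(logits)        # per-neighbor importance weights
        H_out[i] = alpha @ Z[nbrs]     # weighted neighbor aggregation
    return H_out + H                   # residual connection
```

The residual term `+ H` is what allows several such layers to be composed: even if repeated aggregation smooths the transformed features, each node retains its own input signal, which is the role the abstract assigns to residual connections in deepening the network.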
