Abstract

Graph neural networks (GNNs) are a variant of neural networks designed to operate on non-grid-structured data such as graphs. The advent of GNNs has helped address problems across many domains, notably Natural Language Processing (NLP). In NLP, GNNs are used for tasks such as text classification, which has a wide range of applications. Text data can be represented with GNNs in two ways: transductive and inductive. In this paper, we apply the inductive approach using different GNN variants. We observed that the GAT variant gave better performance than the other variants. Moreover, we observed that the complexity of the model and the size of the dataset influence the entropy of the output.
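As a rough illustration of the inductive setting described above, where each document is its own graph and is classified with a GAT, the following minimal PyTorch Geometric sketch shows the general idea. The class name, layer sizes, pooling choice, and graph construction are our own assumptions for illustration, not the paper's implementation.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_mean_pool

class GATTextClassifier(torch.nn.Module):
    # Illustrative inductive text classifier: one graph per document,
    # node features x (e.g. word embeddings), edges from co-occurrence.
    def __init__(self, in_dim, hidden_dim, num_classes, heads=4):
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden_dim, heads=heads)
        self.conv2 = GATConv(hidden_dim * heads, hidden_dim, heads=1)
        self.lin = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.elu(self.conv1(x, edge_index))
        x = F.elu(self.conv2(x, edge_index))
        # Pool node representations into one vector per document graph
        x = global_mean_pool(x, batch)
        return self.lin(x)

Because each document graph is built independently, the trained model can be applied to unseen documents without rebuilding a global corpus graph, which is what distinguishes the inductive from the transductive setup.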
