Abstract

This study compares the performance of graph convolutional neural network (GCN) models with conventional natural language processing (NLP) models for classifying scientific literature related to radio frequency electromagnetic field (RF-EMF). Specifically, the study examines two GCN models: BertGCN and a citation-based GCN. The study finds that BertGCN achieves consistently good performance when the input text is long enough, owing to the attention mechanism of BERT. When the input sequence is short, the composition parameter λ, which combines the output values of the two BertGCN subnetworks, plays a crucial role in achieving high classification accuracy: as the value of λ increases, so does the classification accuracy. The study also proposes and tests a simplified variant of BertGCN, revealing performance differences among the models under two data conditions defined by the presence or absence of keywords. This study makes two main contributions: (1) the implementation and testing of a variant of BertGCN and a citation-based GCN for document classification tasks on RF-EMF publications, and (2) the confirmation of the impact of model conditions, such as the presence of keywords and the input sequence length, on the original BertGCN. Although this study focuses on a specific domain, our approaches have broader implications that extend beyond scientific publications to general text classification.
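For readers unfamiliar with BertGCN, the sketch below illustrates how a composition parameter λ can interpolate the predictions of the two subnetworks. It is a minimal illustration assuming PyTorch and softmax-normalized subnetwork outputs; the function name and the logit values are hypothetical and do not reproduce the authors' implementation.

```python
import torch
import torch.nn.functional as F

def combine_predictions(gcn_logits: torch.Tensor,
                        bert_logits: torch.Tensor,
                        lam: float = 0.7) -> torch.Tensor:
    """Interpolate the two subnetwork outputs with the composition
    parameter lambda (lam), in the spirit of the BertGCN formulation
    Z = lambda * Z_GCN + (1 - lambda) * Z_BERT."""
    gcn_probs = F.softmax(gcn_logits, dim=-1)    # GCN subnetwork prediction
    bert_probs = F.softmax(bert_logits, dim=-1)  # BERT subnetwork prediction
    return lam * gcn_probs + (1.0 - lam) * bert_probs

# Hypothetical example: two documents, three classes
gcn_logits = torch.tensor([[2.0, 0.5, -1.0], [0.1, 1.2, 0.3]])
bert_logits = torch.tensor([[1.5, 0.2, -0.5], [0.0, 0.9, 0.6]])
print(combine_predictions(gcn_logits, bert_logits, lam=0.7))
```

With λ = 1 the combined prediction relies entirely on the GCN subnetwork, while λ = 0 falls back to BERT alone, which is why the abstract's observation about short input sequences centers on this parameter.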
