Abstract

End-to-end aspect-based sentiment analysis (EASA) consists of two sub-tasks: the first extracts the aspect terms in a sentence and the second predicts the sentiment polarities of those terms. For EASA, compared to pipeline and multi-task approaches, joint aspect extraction and sentiment analysis provides a one-step solution that predicts both aspect terms and their sentiment polarities through a single decoding process, which avoids mismatches between the extracted aspect terms and the predicted sentiment polarities, as well as error propagation. Previous studies for this task, especially recent ones, focus on using powerful encoders (e.g., Bi-LSTM and BERT) to model contextual information from the input, with limited effort paid to advanced neural architectures (such as attention and graph convolutional networks) or to leveraging extra knowledge (such as syntactic information). To extend such efforts, in this paper we propose directional graph convolutional networks (D-GCN) to jointly perform aspect extraction and sentiment analysis while encoding syntactic information, where dependencies among words are integrated into our model to enhance its ability to represent input sentences and thereby help EASA. Experimental results on three benchmark datasets demonstrate the effectiveness of our approach, with D-GCN achieving state-of-the-art performance on all datasets.
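
To make the directional idea concrete, below is a minimal PyTorch sketch of one D-GCN layer. The class and variable names (DGCNLayer, a_fwd, a_bwd) are our own illustration rather than the paper's code, and the degree normalization and the way the two directions are combined are assumptions; the point demonstrated is that dependency edges are aggregated separately along the head-to-dependent and dependent-to-head directions before being merged with a self-loop term.

    # A minimal sketch of one directional GCN (D-GCN) layer in PyTorch.
    # DGCNLayer, a_fwd, a_bwd are illustrative names, not the paper's code.
    import torch
    import torch.nn as nn

    class DGCNLayer(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.w_fwd = nn.Linear(dim, dim)   # weights for head -> dependent edges
            self.w_bwd = nn.Linear(dim, dim)   # weights for dependent -> head edges
            self.w_self = nn.Linear(dim, dim)  # self-loop transformation

        def forward(self, h: torch.Tensor, a_fwd: torch.Tensor) -> torch.Tensor:
            # h:     (batch, seq_len, dim) word representations from the encoder
            # a_fwd: (batch, seq_len, seq_len) 0/1 float adjacency of the
            #        dependency tree; a_fwd[b, i, j] = 1 if word i heads word j
            a_bwd = a_fwd.transpose(1, 2)  # reverse the edge direction
            # Normalize by degree to keep activations stable (one common choice).
            d_fwd = a_fwd.sum(dim=-1, keepdim=True).clamp(min=1)
            d_bwd = a_bwd.sum(dim=-1, keepdim=True).clamp(min=1)
            msg_fwd = torch.bmm(a_fwd / d_fwd, self.w_fwd(h))
            msg_bwd = torch.bmm(a_bwd / d_bwd, self.w_bwd(h))
            return torch.relu(self.w_self(h) + msg_fwd + msg_bwd)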

Highlights

  • End-to-end aspect-based sentiment analysis (EASA) aims to extract aspect terms in the text and predict their sentiment polarities so as to understand targeted sentiment towards particular objects

  • Directional graph convolutional networks (D-GCN) work well with both BERT-base and BERT-large, with consistent improvement observed over the baselines across datasets

  • One possible explanation is that although each D-GCN layer only models the contextual features of words directly linked to a given word, contextual information from a larger range can be leveraged indirectly across layers as the number of D-GCN layers increases, so that EASA performance improves (see the sketch after this list)
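
As a hypothetical illustration of the layering effect described in the last highlight, the snippet below stacks the DGCNLayer sketched under the abstract; each additional layer lets a word indirectly receive information from dependency neighbours one hop further away, even though any single layer only looks at directly linked words.

    # Illustrative only: reuses the DGCNLayer sketch from above.
    import torch.nn as nn

    class DGCNStack(nn.Module):
        def __init__(self, dim: int, num_layers: int):
            super().__init__()
            self.layers = nn.ModuleList(DGCNLayer(dim) for _ in range(num_layers))

        def forward(self, h, a_fwd):
            for layer in self.layers:  # after t layers, word i sees t-hop neighbours
                h = layer(h, a_fwd)
            return h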

Summary

Introduction

End-to-end aspect-based sentiment analysis (EASA) aims to extract aspect terms in the text and predict their sentiment polarities so as to understand targeted sentiment towards particular objects. Studies on this task mainly rely on powerful encoders (e.g., Bi-LSTM, CNN, BERT) (Zhang et al., 2015; Ma et al., 2018; Schmitt et al., 2018; Li et al., 2019a; Li et al., 2019b; Luo et al., 2019; He et al., 2019; Hu et al., 2019) and pre-trained embeddings (e.g., GloVe, word2vec, FastText) (Schmitt et al., 2018; Li et al., 2019a) to learn contextual information, with limited effort paid to leveraging advanced architectures and extra knowledge for this task. To extend such effort, we build on graph convolutional networks (GCN), which have shown their effectiveness in conventional sentiment analysis (Zhang et al., 2019; Sun et al., 2019), as well as in other tasks, e.g., text classification (Kipf and Welling, 2016), neural machine translation (Bastings et al., 2017), and semantic role labeling (Marcheggiani and Titov, 2017). To illustrate the effectiveness of our approach, experiments are performed on three benchmark datasets, where the results confirm that D-GCN is an appropriate model for leveraging dependency-based word relations for EASA, with state-of-the-art performance observed on all datasets.
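
The single decoding process mentioned in the abstract is commonly realized with a collapsed tag set that encodes the aspect boundary and its sentiment in one label (e.g., B-POS, I-POS, O). As a hedged illustration of how one tag sequence yields both sub-task outputs at once, the helper below (our own hypothetical naming, not the paper's code) recovers (aspect term, polarity) pairs from such tags.

    # Hypothetical helper: decode a collapsed EASA tag sequence into
    # (aspect term, polarity) pairs in a single pass. Tag names follow the
    # common unified tagging scheme, not necessarily the paper's exact labels.
    from typing import List, Tuple

    def decode_easa_tags(tokens: List[str], tags: List[str]) -> List[Tuple[str, str]]:
        results, span, polarity = [], [], None
        for token, tag in zip(tokens, tags):
            if tag.startswith("B-"):             # a new aspect term starts here
                if span:
                    results.append((" ".join(span), polarity))
                span, polarity = [token], tag[2:]
            elif tag.startswith("I-") and span:  # continuation of the current term
                span.append(token)
            else:                                # "O" closes any open span
                if span:
                    results.append((" ".join(span), polarity))
                span, polarity = [], None
        if span:
            results.append((" ".join(span), polarity))
        return results

    # e.g. decode_easa_tags(["the", "battery", "life", "is", "great"],
    #                       ["O", "B-POS", "I-POS", "O", "O"])
    # -> [("battery life", "POS")]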

The Approach
Graph Convolutional Networks
Directional Graph Convolutional Networks
Tagging with Directional Graph Convolutional Networks
Settings
Results
Ablation Study
Baseline
Conclusion
