Abstract

Aspect-level sentiment analysis aims to identify the sentiment polarity of specific aspect words in a given sentence. Existing studies mostly use recurrent neural network (RNN)-based models. However, truncated backpropagation, vanishing gradients, and exploding gradients often occur during training. To address these issues, this paper proposes a novel network with multiple attention mechanisms for aspect-level sentiment analysis. First, we apply the bidirectional encoder representations from transformers (BERT) model to construct word embedding vectors. Second, multiple attention mechanisms, including intra- and inter-level attention, are used to generate hidden state representations of a sentence. In the intra-level attention mechanism, multi-head self-attention and point-wise feed-forward structures are designed. In the inter-level attention mechanism, global attention captures the interactive information between context and aspect words. Furthermore, a feature-focus attention mechanism is proposed to enhance sentiment identification. Finally, several benchmark aspect-level sentiment analysis datasets are used to evaluate the model. Experiments demonstrate that the proposed model achieves state-of-the-art results compared with baseline models.
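To make the intra-level attention mechanism concrete, the following is a minimal sketch of one plausible instantiation in PyTorch: multi-head self-attention followed by a point-wise feed-forward network applied to BERT token embeddings. The layer sizes, residual connections, and layer normalization are assumptions in the spirit of the standard Transformer block, not details taken from the paper.

```python
import torch
import torch.nn as nn

class IntraLevelAttention(nn.Module):
    """Hypothetical sketch of an intra-level attention block:
    multi-head self-attention + point-wise feed-forward layer.
    Dimensions and layer details are illustrative assumptions."""

    def __init__(self, d_model=768, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(
            d_model, n_heads, dropout=dropout, batch_first=True
        )
        # Point-wise feed-forward: applied independently at each position.
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model), e.g. BERT token embeddings.
        attn_out, _ = self.self_attn(x, x, x)
        x = self.norm1(x + attn_out)     # residual connection + layer norm
        x = self.norm2(x + self.ffn(x))  # residual connection + layer norm
        return x

# Example: encode a batch of 2 sentences of length 16 with 768-dim embeddings.
x = torch.randn(2, 16, 768)
out = IntraLevelAttention()(x)  # -> shape (2, 16, 768)
```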
