Abstract

Fine-grained sentiment polarity classification for short texts has been an important and challenging task in natural language processing in recent years. A short text may contain multiple aspect-terms and opinion terms, with different opinion terms expressing different sentiments toward different aspect-terms, and the polarity of the whole sentence is highly correlated with both. Two challenges remain: how to effectively use contextual information and semantic features, and how to model the correlations between aspect-terms and context words, including opinion terms. To solve these problems, a Self-Attention-Based BiLSTM model with aspect-term information is proposed for fine-grained sentiment polarity classification of short texts. The proposed model can effectively use contextual information and semantic features and, in particular, model the correlations between aspect-terms and context words. The model mainly consists of a word-encode layer, a BiLSTM layer, a self-attention layer, and a softmax layer. The BiLSTM layer aggregates information from the two opposite directions of a sentence through two independent LSTMs. The self-attention layer captures the parts of a sentence that are most important for a given input aspect-term. Between the BiLSTM layer and the self-attention layer, the hidden vector and the aspect-term vector are fused by addition, which reduces the computational complexity incurred by directly concatenating the vectors. Experiments were conducted on the public Restaurant and Laptop corpora from SemEval 2014 Task 4 and the Twitter corpus from ACL 2014, with the Friedman and Nemenyi tests used in the comparison study. Compared with existing methods, the experimental results demonstrate that the proposed model is feasible and efficient.
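
As a concrete illustration of this pipeline, the following is a minimal PyTorch sketch of the described layers. The dimensions, the parameter names, and the use of mean-pooled aspect-word embeddings as the aspect-term vector are assumptions made for illustration; this is a sketch of the described architecture, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class AspectBiLSTMSelfAttention(nn.Module):
    """Sketch of the described pipeline: word-encode -> BiLSTM ->
    additive aspect fusion -> self-attention -> softmax classifier."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=150, num_classes=3):
        super().__init__()
        # Word-encode layer: token ids -> dense embeddings.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # BiLSTM layer: two independent LSTMs read the sentence in opposite
        # directions; their per-step hidden states are concatenated.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Self-attention layer: scores each position of the fused sequence.
        self.attn_proj = nn.Linear(2 * hidden_dim, 2 * hidden_dim)
        self.attn_score = nn.Linear(2 * hidden_dim, 1, bias=False)
        # Softmax (output) layer over the sentiment polarities.
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids, aspect_ids):
        # token_ids: (batch, seq_len); aspect_ids: (batch, aspect_len)
        h, _ = self.bilstm(self.embedding(token_ids))    # (batch, seq_len, 2*hidden)
        # Aspect-term vector: here simply the mean of the aspect word
        # embeddings (an assumption made for this sketch).
        aspect = self.embedding(aspect_ids).mean(dim=1)  # (batch, embed_dim)
        # Fuse by ADDING the aspect vector to every hidden state; with
        # 2 * hidden_dim == embed_dim the shapes match, and addition avoids
        # the larger matrices that direct concatenation would require.
        fused = h + aspect.unsqueeze(1)
        weights = torch.softmax(
            self.attn_score(torch.tanh(self.attn_proj(fused))), dim=1)
        sentence = (weights * h).sum(dim=1)              # attention-pooled sentence vector
        return self.classifier(sentence)                 # polarity logits
```

Choosing hidden_dim so that 2 * hidden_dim equals embed_dim is what lets the aspect vector be fused by plain addition, keeping the attention layer's input size, and hence its cost, unchanged.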

Highlights

  • The task of sentiment polarity classification is regarded as opinion mining [1]

  • Compared with the standard LSTM model, the bidirectional LSTM (BiLSTM) model improves the accuracy of judging sentiment polarity

  • By incorporating a self-attention mechanism, the model can capture the important information about the aspect-term in the sentence


Summary

INTRODUCTION

The task of sentiment polarity classification is regarded as opinion mining [1]. Earlier approaches, such as the Target-Connection LSTM model [10], capture only the historical information of a sentence and cannot make full use of contextual information, so each word cannot obtain precise semantic information and the salient words that most influence the sentiment polarity of the sentence cannot be identified. In this task, the sentiment polarity of each aspect-term in a short text is identified. Although some effective models have been proposed, problems remain: how to effectively use contextual information and semantic features, and how to model the correlations between aspect-terms and context words. A Self-Attention-Based BiLSTM model with aspect-term information is therefore proposed for fine-grained sentiment polarity classification. The word embedding matrix is used as the input to the BiLSTM neural network model, which encodes the sentence.
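
A minimal sketch of this encoding step is shown below, with toy dimensions and a randomly initialized embedding table standing in for the actual word-embedding matrix; all sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 5000, 300, 150      # toy sizes (assumed)
embedding = nn.Embedding(vocab_size, embed_dim)          # word-encode layer
bilstm = nn.LSTM(embed_dim, hidden_dim,
                 batch_first=True, bidirectional=True)   # two opposite-direction LSTMs

token_ids = torch.randint(0, vocab_size, (1, 12))        # one 12-token sentence
hidden_states, _ = bilstm(embedding(token_ids))          # shape: (1, 12, 2 * hidden_dim)
# Each time step now combines left-to-right and right-to-left context,
# giving every word a context-aware representation of the sentence.
```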

BILSTM LAYER
ATTENTION LAYER
MODEL TRAINING
DATASET
The experiments were conducted on three datasets.
EVALUATION METRIC
COMPARISON WITH DIFFERENT METHODS
STATISTICAL TESTS IN DIFFERENT METHODS
Findings
CONCLUSION AND FUTURE WORK
