Abstract

Aspect-level sentiment classification aims to identify the sentiment polarity that a review expresses toward a target. In recent years, neural network-based methods have achieved success on this task, and they fall into two types: those that take target information into account when modelling the context, and those that model the context without considering the target. The former has been shown to outperform the latter. However, most target-related models focus only on the impact of the target on context modelling, ignoring the role of the context in target modelling. In this study, we introduce an interactive neural network model named LT-T-TR, which divides a review into three parts: the left context with the target phrase, the target phrase, and the right context with the target phrase. The interaction between the left/right context and the target phrase is exploited through an attention mechanism to learn the representations of the left/right context and the target phrase separately. As a result, the most important words in the left/right context and in the target phrase are captured, and results on laptop and restaurant datasets demonstrate that our model outperforms state-of-the-art methods.

Highlights

  • To further improve the representations of targets and contexts, we propose an interactive neural network model named LT-T-TR

  • It divides a review into three parts: the left context with the target phrase, the target phrase, and the right context with the target phrase. Three Bidirectional Long Short-Term Memory networks (BiLSTMs) are used to model these parts, respectively (a minimal encoding sketch follows this list)

  • Different words in a review contribute differently to the final representation, and contexts and targets influence each other, so the attention weights of the target phrase and the left/right context are computed by interactive attention between the target phrase and the left/right context. The process consists of two parts: the first is target-to-context attention, which includes target-to-left-context attention and target-to-right-context attention, to obtain better representations of the left/right contexts; the second is context-to-target attention, which includes left-context-to-target attention and right-context-to-target attention
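As a rough illustration of the three-part encoding mentioned above, the sketch below runs one BiLSTM per part in PyTorch. It is a minimal sketch under stated assumptions: the class name and all dimensions (vocab_size, embed_dim, hidden_dim) are illustrative, not values from the paper.

```python
import torch
import torch.nn as nn

class ThreePartEncoder(nn.Module):
    """Encode the three review parts with one BiLSTM each (illustrative sketch)."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=150):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One BiLSTM per part: left context + target, target, right context + target.
        self.lstm_left = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.lstm_target = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.lstm_right = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, left_ids, target_ids, right_ids):
        # Each output is a sequence of hidden states of size 2 * hidden_dim.
        h_left, _ = self.lstm_left(self.embed(left_ids))
        h_target, _ = self.lstm_target(self.embed(target_ids))
        h_right, _ = self.lstm_right(self.embed(right_ids))
        return h_left, h_target, h_right
```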


Summary

[Figure: model overview; components include the left context-to-target attention and the right context-to-target attention modules.]

For the left context LT, the input to the Bi-LSTM is $[v_{l_1}, v_{l_2}, \ldots, v_{l_{s-1}}] \in \mathbb{R}^{(s-1) \times d}$, from which we obtain the hidden states $[h_{l_1}, h_{l_2}, \ldots, h_{l_{s-1}}]$. After the hidden representations of the contexts and the target phrase are generated by the three Bi-LSTMs, an attention mechanism calculates the different importance of words in the left/right context and in the target phrase. Given the hidden representations of the left context $[h_{l_1}, h_{l_2}, \ldots, h_{l_{s-1}}]$ and the average representation of the target $T_{initial}$, we first obtain the target-to-left-context attention representation $LT_{final} = \sum_{i=1}^{s-1} \alpha_i h_{l_i}$. Similarly to equations (6)–(8), the target-to-right-context attention representation $TR_{final}$ is obtained using the average target representation $T_{initial}$. Then, by computing a weighted combination of the hidden states of the target phrase, we obtain the left-context-to-target representation $T^{l}_{final} = \sum_{k} \alpha^{l}_{k} h^{t}_{k}$, and, similarly to equations (9)–(11), the right-context-to-target representation $T^{r}_{final}$ is obtained using $TR_{initial}$ and the hidden representations of the target. The model is trained with the cross-entropy loss

$L = -\sum_{(S,T) \in D} \sum_{c=1}^{C} g(y^{c}_{(S,T)}) \log P(y^{c}_{(S,T)})$,

where $D$ denotes all training data, $(S,T)$ is a review-target pair, $C$ is the number of sentiment categories, $P(y^{c}_{(S,T)})$ is the probability of predicting $(S,T)$ as class $c$ given by the softmax function, and $g(y^{c}_{(S,T)})$ indicates whether class $c$ is the correct sentiment category.
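The interactive attention step and the classification layer can be sketched as follows, continuing from the encoder above. This is a hedged illustration, not the paper's exact implementation: the bilinear scoring function, the use of averaged context representations as queries for context-to-target attention, and the concatenation of the four attended vectors before the output layer are assumptions where the summary leaves details unstated; d is the BiLSTM output size (twice the per-direction hidden size).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attend(query, keys, proj):
    """Score keys against the query (bilinear form, an assumed choice),
    softmax-normalize, and return the attention-weighted sum of the keys."""
    # query: (batch, d); keys: (batch, seq_len, d)
    scores = torch.bmm(keys, proj(query).unsqueeze(2)).squeeze(2)  # (batch, seq_len)
    alpha = F.softmax(scores, dim=1)                               # attention weights
    return torch.bmm(alpha.unsqueeze(1), keys).squeeze(1)          # (batch, d)

class InteractiveAttentionClassifier(nn.Module):
    def __init__(self, d, num_classes=3):
        super().__init__()
        self.proj_ctx = nn.Linear(d, d, bias=False)  # target-to-context scoring
        self.proj_tgt = nn.Linear(d, d, bias=False)  # context-to-target scoring
        self.out = nn.Linear(4 * d, num_classes)

    def forward(self, h_left, h_target, h_right):
        t_initial = h_target.mean(dim=1)   # average target representation T_initial
        lt_initial = h_left.mean(dim=1)    # assumed: average left-context representation
        tr_initial = h_right.mean(dim=1)   # assumed: average right-context representation TR_initial
        # Target-to-context attention: the target attends over each context.
        lt_final = attend(t_initial, h_left, self.proj_ctx)
        tr_final = attend(t_initial, h_right, self.proj_ctx)
        # Context-to-target attention: each context attends over the target.
        t_l_final = attend(lt_initial, h_target, self.proj_tgt)
        t_r_final = attend(tr_initial, h_target, self.proj_tgt)
        # Concatenate the four attended representations and classify.
        rep = torch.cat([lt_final, tr_final, t_l_final, t_r_final], dim=1)
        return self.out(rep)  # logits
```

Training the logits with nn.CrossEntropyLoss corresponds to the objective above: a softmax over the C sentiment classes followed by the negative log-likelihood of the correct class for each review-target pair.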


