Abstract

Aspect-level sentiment classification aims to determine the sentiment polarity of each aspect mentioned in a review, where a single review typically contains multiple aspects that may carry different polarities. Unlike document-level sentiment classification, it requires a different context representation for each aspect. Existing methods normally use a Long Short-Term Memory (LSTM) network to model aspects and contexts separately and combine attention mechanisms to extract features of a specific aspect from its context. Because these attention mechanisms play no role in sequence modeling itself, the aspect is not considered when the context sequence representation is generated. This study proposes a novel aspect-context interactive representation structure that relies only on an attention mechanism to generate sequence-to-sequence representations of both the context and the aspect. It extracts features related to the specific aspect while modeling its context sequence and simultaneously produces a high-quality aspect representation. We conducted comprehensive experiments comparing the proposed model with thirteen existing methods. The results show that it achieves significantly better performance on the Restaurant dataset and very competitive results on the Laptop and Twitter datasets.

Highlights

  • Aspect-level sentiment classification aims to determine a reviewer’s sentiment tendency toward a service or product based on a given review text [1]; it is a sub-task of the broader task of sentiment analysis [2]

  • It is more valuable in many real-world applications, as its sentiment polarity focuses on each specific aspect instead of a mixed document-level polarity, which usually lacks detailed sentiment information for different aspects

  • Comparisons with Long Short-Term Memory (LSTM) and attention-based LSTM: because MultiACIA considers aspect information while modeling the sequence representation and continuously extracts the features related to a specific aspect from its context through a multi-layer attention stacking structure (a sketch of this stacking idea follows this list), its accuracy and macro-F1 are much higher than those of LSTM
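The page does not reproduce the exact equations of this stacking structure, so the following is only a minimal illustrative sketch in PyTorch; the class name StackedAspectAttention, the mean-pooled aspect vector, the number of layers, and the linear scoring layers are assumptions made for illustration, not the paper's formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StackedAspectAttention(nn.Module):
    """Illustrative sketch: each layer re-weights the context words by their
    relevance to the aspect, so aspect-related features are extracted
    repeatedly; the pooled result feeds a softmax polarity classifier."""

    def __init__(self, dim, num_layers=3, num_classes=3):
        super().__init__()
        self.score_layers = nn.ModuleList(
            [nn.Linear(dim, dim, bias=False) for _ in range(num_layers)])
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, context, aspect):
        # context: (batch, n_ctx, dim), aspect: (batch, n_asp, dim)
        asp_vec = aspect.mean(dim=1, keepdim=True)            # (batch, 1, dim)
        for score in self.score_layers:
            weights = torch.bmm(score(context), asp_vec.transpose(1, 2))
            alpha = F.softmax(weights, dim=1)                  # (batch, n_ctx, 1)
            context = alpha * context                          # aspect-aware re-weighting
        features = torch.cat([context.mean(dim=1), asp_vec.squeeze(1)], dim=-1)
        return self.classifier(features)                       # polarity logits
```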


Summary

INTRODUCTION

Aspect-level sentiment classification aims to determine a reviewer’s sentiment tendency toward a service or product based on a given review text [1]; it is a sub-task of the broader task of sentiment analysis [2]. The weight differences generated by the attention mechanism cause our model to focus on the features that help discriminate the polarity of the specified aspect (these differences exist for every word of the aspect phrase and its context). The mechanism also works well for aspect representations, as it can automatically identify the most representative words in the aspect phrase. The representation structure of Multi-layer Aspect-Context Interactive Attention (MultiACIA) can generate a sequence representation of the context by focusing on the words associated with a specific aspect, which makes it more suitable than LSTM for handling the mixed information of multiple aspects in a context. For aspect-level sentiment classification, there are methods that combine BERT with this task and obtain strong results [34]–[37]. Their performance, however, largely depends on BERT’s powerful language feature representation capabilities, and training a large BERT model requires massive datasets, powerful GPU clusters, and considerable training time. For each different aspect Ti in the same sentence S, the model must recognize the corresponding evaluation words in its context.
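To make this interaction concrete, here is a minimal sketch of one attention-only interaction layer in PyTorch; the class name InteractiveAttention, the mean-pooled summaries, and the bilinear scoring are illustrative assumptions rather than the exact MultiACIA equations. Context words are weighted by their relevance to an aspect summary and aspect words by their relevance to a context summary, so each side's sequence representation is conditioned on the other:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractiveAttention(nn.Module):
    """Illustrative single interaction layer: the context sequence attends to a
    summary of the aspect, and the aspect sequence attends to a summary of the
    context, so each side's representation reflects the other."""

    def __init__(self, dim):
        super().__init__()
        self.w_ctx = nn.Linear(dim, dim, bias=False)  # scores context words against the aspect
        self.w_asp = nn.Linear(dim, dim, bias=False)  # scores aspect words against the context

    def forward(self, context, aspect):
        # context: (batch, n_ctx, dim), aspect: (batch, n_asp, dim)
        ctx_mean = context.mean(dim=1, keepdim=True)   # (batch, 1, dim)
        asp_mean = aspect.mean(dim=1, keepdim=True)    # (batch, 1, dim)

        # weight of each context word with respect to the aspect summary
        ctx_alpha = F.softmax(
            torch.bmm(self.w_ctx(context), asp_mean.transpose(1, 2)), dim=1)
        # weight of each aspect word with respect to the context summary
        asp_alpha = F.softmax(
            torch.bmm(self.w_asp(aspect), ctx_mean.transpose(1, 2)), dim=1)

        # re-weighted sequences; keeping the sequence shape makes the layer stackable
        return ctx_alpha * context, asp_alpha * aspect
```

Stacking several such layers and pooling the resulting sequences would then give aspect-aware features for polarity classification.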

ASPECT-CONTEXT INTERACTIVE ATTENTION
MULTIACIA REPRESENTATION
Findings
CONCLUSION