Abstract

Aspect-level sentiment classification aims to identify the sentiment polarity towards a particular aspect in a sentence. Previous attention-based methods generate an aspect-specific representation for each aspect and use it to classify the sentiment polarity. However, the normalized attention scores scatter over every word in the sentence, which leads to two issues. First, the attention may inherently introduce noise and degrade performance. Second, the opinion words may be “diluted” by other words, even though the opinion features should dominate sentiment analysis. These issues become more severe in multi-aspect sentences. In this paper, we address both issues via hybrid regularizations, i.e., aspect-level and task-level regularizations. Concretely, the aspect-level regularizations constrain the attention weights to alleviate noise: orthogonal regularization is designed for multi-aspect sentences and sparse regularization for single-aspect sentences. To extract sentiment-dominant features, task-level regularization is introduced through an orthogonal auxiliary task, i.e., aspect category detection, which allocates task-oriented context information to specific downstream tasks. Extensive experiments on three public datasets demonstrate the effectiveness of the proposed approach in both single-task and multi-task scenarios.
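To make the two aspect-level regularizers concrete, the sketch below shows one plausible PyTorch formulation of a sparse penalty (for single-aspect attention) and an orthogonal penalty (for multi-aspect attention). The specific penalty forms, function names, and tensor shapes are illustrative assumptions, not the paper's exact definitions.

```python
# Illustrative sketch (not the authors' released code): aspect-level attention
# regularizers, assuming an attention matrix A of shape (num_aspects, seq_len)
# whose rows are softmax-normalized attention distributions, one per aspect.
import torch

def sparse_regularization(alpha: torch.Tensor) -> torch.Tensor:
    """Encourage one aspect's attention to concentrate on a few words.

    alpha: (seq_len,) softmax weights for a single aspect.
    One common form pushes the sum of squared weights toward 1, which is
    reached only when all the attention mass sits on one position.
    """
    return torch.abs(alpha.pow(2).sum() - 1.0)

def orthogonal_regularization(A: torch.Tensor) -> torch.Tensor:
    """Push different aspects' attention distributions apart.

    A: (num_aspects, seq_len) stacked attention weights.
    Penalizing the off-diagonal entries of A @ A.T discourages two aspects
    from attending to the same context words.
    """
    gram = A @ A.t()                                  # (num_aspects, num_aspects)
    identity = torch.eye(A.size(0), device=A.device)
    return torch.norm(gram - identity)                # Frobenius norm

# Hypothetical usage: add the penalties to the classification loss.
A = torch.softmax(torch.randn(3, 20), dim=-1)         # 3 aspects, 20 tokens
loss_reg = orthogonal_regularization(A) + sparse_regularization(A[0])
```

In this reading, the sparse term is applied when a sentence contains a single aspect, while the orthogonal term is applied across the attention rows of a multi-aspect sentence; both would be added to the sentiment classification loss with tunable weights.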
