Abstract

Aspect category sentiment analysis (ACSA) is a subtask of aspect-based sentiment analysis (ABSA). It aims to identify the sentiment polarities of predefined aspect categories in a sentence. ACSA has received significant attention in recent years owing to the vast amount of online reviews about target products. Existing methods mainly rely on architectures such as LSTMs and CNNs, together with attention mechanisms, to focus on the sentence spans that are informative for a given aspect category. However, they pay little attention to the fusion of the aspect category with the corresponding sentence, which is important for the ACSA task. In this paper, we focus on deeply fusing the aspect category with the corresponding sentence to improve sentiment classification. We propose a novel model named Self-Attention Fusion Networks (SAFN). First, multi-head self-attention is used to obtain attention feature representations of the sentence and the aspect category separately. Then, multi-head attention is applied again to deeply fuse these two representations. Finally, a convolutional layer extracts informative features. We conduct experiments on a Chinese dataset collected from an online automotive product forum and on a public English dataset, Laptop-2015 from SemEval 2015 Task 12. The experimental results demonstrate that our model achieves substantial improvements in both effectiveness and efficiency.
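The three-stage pipeline described above (separate self-attention, cross-modal fusion, then convolution) can be sketched as follows. This is a minimal illustration with numpy, not the authors' implementation: the dimensions, number of heads, fusion direction (sentence features attending to aspect features), and the single convolution filter are all assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def multi_head(Q, K, V, n_heads=4):
    # Split the model dimension into heads, attend per head, concatenate.
    # (Learned per-head projections are omitted for brevity.)
    qs, ks, vs = (np.split(m, n_heads, axis=-1) for m in (Q, K, V))
    return np.concatenate(
        [scaled_dot_attention(q, k, v) for q, k, v in zip(qs, ks, vs)],
        axis=-1)

# Toy inputs: a 6-token sentence and a 2-token aspect category, d_model = 8.
sent = rng.standard_normal((6, 8))
aspect = rng.standard_normal((2, 8))

# Step 1: multi-head self-attention over each input separately.
sent_feat = multi_head(sent, sent, sent)
aspect_feat = multi_head(aspect, aspect, aspect)

# Step 2: fusion via multi-head attention — sentence features as queries,
# aspect-category features as keys/values (direction is an assumption).
fused = multi_head(sent_feat, aspect_feat, aspect_feat)

# Step 3: 1-D convolution (window 3) plus max-over-time pooling
# to extract an informative feature for the classifier.
W = rng.standard_normal(3 * 8)  # one hypothetical convolution filter
conv = np.array([W @ fused[i:i + 3].reshape(-1)
                 for i in range(len(fused) - 2)])
feature = conv.max()
```

In a trained model each stage would carry learned projection and filter weights and feed a softmax classifier over sentiment polarities; the sketch only shows the shape of the data flow.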

