Abstract

Deep neural architectures such as CNNs, GRUs, and LSTMs equipped with self-attention have proven effective for sentiment analysis. However, existing attention models tend to focus on individual tokens or aspect meanings within an expression. When a text conveys multiple sentiments from different perspectives, such models fail to extract the most critical and comprehensive features of the text as a whole. The present study proposes a multiview attention model for learning sentence representations. Instead of a single attention, multiple view vectors map attention from different perspectives, and a fusion gate then combines these multiview attentions into a final representation. To ensure that the views remain distinct, a regularization term adds a penalty to the loss function. The proposed model also extends to other text tasks, such as question and topic classification, providing a comprehensive representation for classification. Comparative experiments on both multiclass and multilabel classification datasets show that the proposed method improves on the performance of several previously proposed attention models.
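The mechanism described above can be sketched in a few lines of NumPy: each of K view vectors scores the token states to produce its own attention distribution, a fusion gate weighs the resulting per-view context vectors, and a penalty discourages the views from attending to the same tokens. This is a minimal illustrative sketch, not the authors' implementation: the dot-product scoring, the softmax form of the fusion gate, and the Frobenius-norm diversity penalty are all assumptions, and every name (`multiview_attention`, `w_gate`, etc.) is hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multiview_attention(H, views, w_gate):
    """Fuse K attention views over token states H (n_tokens x d).

    views:  (K, d) view vectors, one per perspective (assumed learnable)
    w_gate: (d,)   gate parameter scoring each view's context vector
    Returns the fused sentence vector, the (K, n_tokens) attention
    matrix, and a diversity penalty to be added to the loss.
    """
    scores = views @ H.T                 # (K, n) token relevance per view
    alpha = softmax(scores, axis=1)      # per-view attention distributions
    contexts = alpha @ H                 # (K, d) one context vector per view
    g = softmax(contexts @ w_gate)       # (K,) fusion-gate weights over views
    sentence = g @ contexts              # (d,) fused sentence representation
    # Penalty ||A A^T - I||_F^2 pushes the K attention rows apart.
    K = alpha.shape[0]
    penalty = np.linalg.norm(alpha @ alpha.T - np.eye(K)) ** 2
    return sentence, alpha, penalty
```

In this sketch the gate reduces each view to a scalar weight via `w_gate`; a sigmoid element-wise gate would be an equally plausible reading of "fusion gate". The penalty term would be scaled by a coefficient and added to the classification loss during training.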
