Abstract

Aspect-level multimodal sentiment analysis is the fine-grained sentiment analysis task of predicting the sentiment polarity of given aspects in multimodal data. Most existing multimodal sentiment analysis approaches focus on mining and fusing global multimodal features while overlooking the correlations among finer-grained local features, which considerably limits the semantic relevance that can be captured between modalities. Therefore, a novel aspect-level multimodal sentiment analysis method based on global–local features fusion with co-attention (GLFFCA) is proposed to comprehensively explore multimodal associations from both global and local perspectives. Specifically, an aspect-guided global co-attention module is designed to capture aspect-guided intra-modality global correlations. Meanwhile, a gated local co-attention module is introduced to adaptively align multimodal local features. Following that, a global–local multimodal feature fusion module integrates global and local multimodal features in a hierarchical manner. Extensive experiments on the Twitter-2015 and Twitter-2017 datasets validate the effectiveness of the proposed method, which achieves better aspect-level multimodal sentiment analysis performance than other related methods.
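
To make the co-attention-with-gating pattern named in the abstract concrete, below is a minimal PyTorch sketch of bidirectional co-attention between text and image features followed by a gated combination. This is an illustrative assumption, not the paper's actual GLFFCA modules: the class names (CoAttention, GatedFusion), the bilinear affinity formulation, and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    """Bidirectional co-attention between two modality feature sequences.

    Hypothetical sketch of the general technique; the paper's exact
    formulation (e.g., aspect guidance) is not reproduced here.
    """
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)  # bilinear affinity weights

    def forward(self, text, image):
        # text:  (batch, n_words,   dim)
        # image: (batch, n_regions, dim)
        # Affinity between every word and every image region: (batch, n_words, n_regions)
        affinity = torch.bmm(self.w(text), image.transpose(1, 2))
        # Each word attends over image regions; each region attends over words.
        text_ctx = torch.bmm(F.softmax(affinity, dim=-1), image)                     # (batch, n_words, dim)
        image_ctx = torch.bmm(F.softmax(affinity, dim=1).transpose(1, 2), text)      # (batch, n_regions, dim)
        return text_ctx, image_ctx

class GatedFusion(nn.Module):
    """Gate that adaptively weighs cross-modal context against the original
    features, loosely mirroring the gated local co-attention idea."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, feat, ctx):
        # Per-feature gate in [0, 1] decides how much cross-modal context to admit.
        g = torch.sigmoid(self.gate(torch.cat([feat, ctx], dim=-1)))
        return g * ctx + (1 - g) * feat

# Usage with made-up shapes: 20 token features and 49 (7x7) region features.
text = torch.randn(2, 20, 256)
image = torch.randn(2, 49, 256)
coattn, fusion = CoAttention(256), GatedFusion(256)
t_ctx, i_ctx = coattn(text, image)
fused_text = fusion(text, t_ctx)  # (2, 20, 256)
```

In GLFFCA terms, such a gate would control how much of the cross-modal local context enters each token's representation; the paper's actual aspect-guided and hierarchical fusion designs may differ substantially from this sketch.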
