Abstract
Existing image-text aspect-level sentiment analysis methods suffer from insufficient feature extraction within a single modality, neglect of the association between the text and the target (aspect) words, and inadequate interaction between modalities. To address these issues, an image-text aspect-level sentiment analysis method based on an attention mechanism and bimodal fusion (ITASA-AMB) is proposed. The model uses a self-attention mechanism and a graph convolutional network to fully model the interaction between aspect words and the text and between aspect words and the image, and then achieves deep inter-modal interaction and fusion through a bimodal fusion mechanism, thereby improving the accuracy of sentiment classification. Experimental results show that the proposed ITASA-AMB achieves ACC and F1 values of 87.6% and 80.1% on the Twitter-2015 dataset, and 82.3% and 77.2% on the Twitter-2017 dataset, a significant improvement over several other advanced multimodal sentiment analysis methods.
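To make the described pipeline concrete, below is a minimal sketch of the three stages named in the abstract: aspect-text interaction via attention, aspect-image interaction via a graph convolution over image-region nodes, and a gated bimodal fusion before classification. All module names, dimensions, the adjacency construction, and the gated-fusion strategy are assumptions for illustration only; the paper's actual ITASA-AMB architecture may differ.

```python
import torch
import torch.nn as nn

class BimodalFusionSketch(nn.Module):
    """Hypothetical sketch: attention + GCN + gated bimodal fusion."""

    def __init__(self, dim=128, heads=4, num_classes=3):
        super().__init__()
        # Aspect-text interaction: aspect embedding attends over text tokens.
        self.text_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Aspect-image interaction: one graph-convolution layer over image
        # region features (assumed: regions as nodes, adjacency given).
        self.gcn_weight = nn.Linear(dim, dim)
        # Bimodal fusion: gated combination of the two modality summaries.
        self.gate = nn.Linear(2 * dim, dim)
        self.classifier = nn.Linear(dim, num_classes)  # neg / neutral / pos

    def forward(self, aspect, text, image_nodes, adj):
        # aspect:      (B, 1, D) aspect-word embedding
        # text:        (B, T, D) token embeddings
        # image_nodes: (B, N, D) region features; adj: (B, N, N) adjacency
        t, _ = self.text_attn(aspect, text, text)           # (B, 1, D)
        v = torch.relu(adj @ self.gcn_weight(image_nodes))  # graph conv
        v = v.mean(dim=1, keepdim=True)                     # pool regions
        g = torch.sigmoid(self.gate(torch.cat([t, v], dim=-1)))
        fused = g * t + (1 - g) * v                         # gated fusion
        return self.classifier(fused.squeeze(1))            # (B, C) logits

# Usage with random tensors (shapes only, no real data):
model = BimodalFusionSketch()
logits = model(torch.randn(2, 1, 128), torch.randn(2, 16, 128),
               torch.randn(2, 9, 128),
               torch.softmax(torch.randn(2, 9, 9), dim=-1))
print(logits.shape)  # torch.Size([2, 3])
```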