Abstract
Although significant progress has been made in sentiment analysis of image–text data, existing methods still fall short in capturing cross-modal correlations and fine-grained detail. To address these issues, we propose a Multi-Granularity Attention Fusion Network for Implicit Sentiment Analysis (MGAFN-ISA). MGAFN-ISA leverages neural networks and attention mechanisms to reduce noise interference between modalities and to capture distinct, fine-grained visual and textual features. The model includes two key feature extraction modules: a visual feature extractor based on multi-scale attention fusion and a textual feature extractor based on a hierarchical attention mechanism, each designed to extract detailed and discriminative representations of its modality. Additionally, we introduce an image translator engine that produces accurate and detailed image descriptions, further narrowing the semantic gap between the visual and textual modalities. A bidirectional cross-attention mechanism exploits correlations between fine-grained local regions across modalities, extracting complementary information from heterogeneous visual and textual data. Finally, we design an adaptive multimodal classification module that dynamically adjusts the contribution of each modality through an adaptive gating mechanism. Extensive experiments show that MGAFN-ISA achieves significant performance improvements over nine state-of-the-art methods on multiple public datasets, validating the effectiveness of the proposed approach.
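To make the two fusion ideas named above concrete, the following is a minimal illustrative sketch (not the authors' implementation): it pairs bidirectional cross-attention between text tokens and image regions with an adaptive gate that weights each modality's contribution before classification. All module names, dimensions, and the choice of PyTorch's `nn.MultiheadAttention` are assumptions made for illustration.

```python
# Illustrative sketch only: bidirectional cross-attention plus adaptive gating
# fusion, as described at a high level in the abstract. Hyperparameters and the
# specific layers used here are assumptions, not the paper's actual design.
import torch
import torch.nn as nn


class BidirectionalCrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4, num_classes: int = 3):
        super().__init__()
        # Text attends to image regions, and image attends to text tokens.
        self.text_to_image = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.image_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Adaptive gate: learns a per-example, per-dimension mix of the modalities.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_feats: (B, T, dim) token features; image_feats: (B, R, dim) region features
        text_enriched, _ = self.text_to_image(text_feats, image_feats, image_feats)
        image_enriched, _ = self.image_to_text(image_feats, text_feats, text_feats)
        # Pool each enriched stream to one vector per example.
        t = text_enriched.mean(dim=1)
        v = image_enriched.mean(dim=1)
        # Gating weights decide how much each modality contributes to the fused vector.
        g = self.gate(torch.cat([t, v], dim=-1))
        fused = g * t + (1.0 - g) * v
        return self.classifier(fused)


if __name__ == "__main__":
    model = BidirectionalCrossAttentionFusion()
    text = torch.randn(2, 20, 256)   # batch of 2, 20 text tokens
    image = torch.randn(2, 49, 256)  # batch of 2, 49 image regions (e.g. a 7x7 grid)
    logits = model(text, image)
    print(logits.shape)  # torch.Size([2, 3])
```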