Abstract

Multimodal sentiment analysis has become a popular research field: it combines text, images, videos, and other modalities to perform sentiment analysis. A multimodal sentiment analysis system typically first extracts the sentiment features of each modality and then fuses the features of the different modalities, so both single-modality feature extraction and multimodal information fusion are critical. This paper proposes a deep-learning-based multimodal sentiment analysis model for image and text. The model consists of a text feature extraction module, an image feature extraction module, and a feature fusion module, followed by a final sentiment classification step. For text feature extraction, we use an improved CNN that removes the pooling layer and adds an attention mechanism; for image feature extraction, we use a modified dense-block architecture; finally, we use an attention mechanism to fuse the two modalities and perform sentiment analysis. Extensive comparison experiments show that the IDFN model proposed in this paper achieves a significant improvement over traditional sentiment analysis methods.
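To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the three modules the abstract names: a pooling-free CNN with attention over token positions for text, a dense block for images, and an attention-weighted fusion of the two modality features. All layer sizes, module names (TextCNN, DenseBlock, IDFN), and the classifier head are illustrative assumptions; the abstract does not specify the paper's actual IDFN configuration.

```python
# Illustrative sketch of the IDFN-style pipeline described in the abstract.
# All hyperparameters and module names are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Pooling-free CNN over token embeddings; attention replaces pooling."""
    def __init__(self, vocab_size=10000, emb_dim=128, channels=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # 1-D convolution over the token axis; padding preserves sequence length.
        self.conv = nn.Conv1d(emb_dim, channels, kernel_size=3, padding=1)
        self.attn = nn.Linear(channels, 1)  # per-position attention scores

    def forward(self, tokens):                    # tokens: (B, T)
        x = self.embed(tokens).transpose(1, 2)    # (B, emb_dim, T)
        h = F.relu(self.conv(x)).transpose(1, 2)  # (B, T, channels)
        w = torch.softmax(self.attn(h), dim=1)    # (B, T, 1) attention weights
        return (w * h).sum(dim=1)                 # (B, channels) weighted sum

class DenseBlock(nn.Module):
    """Simplified dense block: each layer sees all earlier feature maps."""
    def __init__(self, in_ch, growth=32, layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers))

    def forward(self, x):
        for conv in self.layers:
            x = torch.cat([x, F.relu(conv(x))], dim=1)  # dense connectivity
        return x

class IDFN(nn.Module):
    """Text branch + image branch + attention fusion + sentiment head."""
    def __init__(self, num_classes=3, feat_dim=256):
        super().__init__()
        self.text = TextCNN(channels=feat_dim)
        self.stem = nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3)
        self.dense = DenseBlock(32)
        self.img_proj = nn.Linear(32 + 4 * 32, feat_dim)
        self.fusion_attn = nn.Linear(feat_dim, 1)  # scores each modality
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, tokens, images):
        t = self.text(tokens)                              # (B, feat_dim)
        v = self.dense(F.relu(self.stem(images)))          # (B, C, H, W)
        v = self.img_proj(v.mean(dim=(2, 3)))              # (B, feat_dim)
        both = torch.stack([t, v], dim=1)                  # (B, 2, feat_dim)
        w = torch.softmax(self.fusion_attn(both), dim=1)   # (B, 2, 1)
        fused = (w * both).sum(dim=1)                      # attention fusion
        return self.head(fused)                            # class logits

model = IDFN()
logits = model(torch.randint(0, 10000, (2, 20)), torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 3])
```

The fusion step mirrors the abstract's description in miniature: instead of concatenating the two modality vectors, an attention layer assigns each modality a weight and the fused representation is their weighted sum.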
