Abstract

Increasingly, individuals express their opinions and attitudes on social media through visual content accompanied by text captions. Sentiment analysis of visual media such as images, videos, and GIFs has therefore become an emerging research direction for understanding social engagement and predicting opinions. Considerable progress has been made on text sentiment analysis and image sentiment analysis individually, but combining image sentiment analysis with text caption analysis remains underexplored. This article presents a VGG Network-based Intermodal Sentiment Analysis Model (VGGNET-ISAM) for transferring the connection between text and images. A mapping process based on the VGG Network is developed to encode the opinion information as numerical feature vectors, and an Active Deep Learning (ADL) classifier predicts opinions from these vectors. Simulation experiments are carried out to evaluate the proposed approach. The findings show that the model outperforms comparable methods, achieving higher accuracy and precision with lower delay and a lower error rate.
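The abstract does not give implementation details, but the described pipeline, a VGG-based mapping that produces numerical opinion vectors followed by a classifier over the fused image and caption representations, can be sketched roughly as below. This is a minimal illustrative sketch in PyTorch under assumed dimensions (`caption_dim`, `num_classes`) and an assumed concatenation-based fusion; the paper's actual mapping process and ADL classifier may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch of an intermodal sentiment pipeline: a VGG-16 image encoder
# produces an opinion feature vector, which is fused with a caption embedding and
# passed to a small classifier. Dimensions and fusion choice are assumptions, not
# the paper's specification.
class IntermodalSentimentModel(nn.Module):
    def __init__(self, caption_dim: int = 300, num_classes: int = 3):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.image_encoder = vgg.features              # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.image_proj = nn.Linear(512 * 7 * 7, 512)  # map VGG features to a 512-d vector
        self.text_proj = nn.Linear(caption_dim, 512)   # map caption embedding to 512-d
        self.classifier = nn.Sequential(               # joint sentiment classifier
            nn.Linear(512 * 2, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, image: torch.Tensor, caption_vec: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.image_encoder(image)).flatten(1)
        img_feat = self.image_proj(x)
        txt_feat = self.text_proj(caption_vec)
        fused = torch.cat([img_feat, txt_feat], dim=1)  # intermodal fusion by concatenation
        return self.classifier(fused)

# Example usage with dummy inputs (batch of one image and one caption embedding)
model = IntermodalSentimentModel()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 300))
print(logits.shape)  # torch.Size([1, 3]), e.g. positive / neutral / negative scores
```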
