Abstract
Before the web became available as a corpus, positive and negative news was detected by reading the textual content of physical newspapers rather than through automatic identification over readily available e-newspapers. Earlier sentiment analysis has therefore focused on unimodal data, with little effort devoted to multimodal data. However, multimodal information helps us obtain a clearer understanding of sentiment. To the best of our knowledge, little work exists on image–text multimodal sentiment analysis for Assamese, a low-resource Indian language spoken mostly in the northeastern part of India. We built a dataset of Assamese news articles, each consisting of news text, an associated image, and an image caption, to conduct an experimental study. Focusing on sentiment-bearing words and on the discriminative image regions most related to sentiment, two individual unimodal models, textual and visual, are proposed. The visual model is developed using an encoder–decoder-based image caption generation system. An image–text multimodal approach is proposed to explore the internal correlation between textual and visual features for joint sentiment classification. Finally, we propose the multimodal sentiment analysis framework, Textual Visual Multimodal Fusion, which employs a late fusion scheme to merge the three modalities for the final sentiment prediction. Experimental results on the in-house Assamese dataset demonstrate that contextual integration of multimodal features outperforms unimodal features.
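The late fusion scheme mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each of the three modalities (textual, visual, and joint image–text) produces class probabilities, which are then combined by a weighted average before the final argmax. The function name, the example probabilities, and the equal weights are all hypothetical.

```python
import numpy as np

def late_fusion(prob_text, prob_visual, prob_multimodal,
                weights=(1 / 3, 1 / 3, 1 / 3)):
    """Fuse per-modality class probabilities by weighted averaging.

    Each argument is a probability vector over sentiment classes
    (e.g., negative, positive) from one modality's classifier.
    Returns the fused probability vector and the predicted class index.
    """
    probs = np.stack([prob_text, prob_visual, prob_multimodal])
    w = np.asarray(weights)[:, None]          # broadcast weights over classes
    fused = (w * probs).sum(axis=0)           # weighted average per class
    return fused, int(np.argmax(fused))

# Hypothetical probabilities over (negative, positive) from three classifiers
p_text = np.array([0.30, 0.70])
p_visual = np.array([0.60, 0.40])
p_multimodal = np.array([0.20, 0.80])

fused, label = late_fusion(p_text, p_visual, p_multimodal)
# With equal weights, fused ≈ [0.367, 0.633], so the predicted class is 1
```

With equal weights this reduces to a simple average of the three probability vectors; in practice the weights could be tuned on a validation set.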
Published in: ACM Transactions on Asian and Low-Resource Language Information Processing