Abstract

Modeling the relationship between the emotional components of images and text is central to multimodal emotion analysis. However, most existing multimodal affective models simply combine image and text features without thoroughly investigating their interactions, resulting in poor recognition performance. Therefore, a multimodal emotion cognition method based on multi-channel image-text interaction is proposed. The method extracts textual context features and encodes scene and image information to obtain useful features. Based on these results, a modal alignment module is applied to relate affective image regions to words, and a cross-modal gating module is then applied to fuse the multimodal features. Extensive experiments on three open datasets yield accuracies of 0.8122 on MSA-single, 0.7307 on MSA-MULTIPLE, and 0.7159 on TumEmo. The results show that the method is effective for multimodal emotion detection.
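To make the described pipeline concrete, below is a minimal PyTorch sketch of the two fusion steps the abstract names. The use of cross-attention for modal alignment, the sigmoid-gated blend, and all module names, shapes, and dimensions are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of "modal alignment" followed by "cross-modal gating".
# All design choices here are assumptions made for illustration only.
import torch
import torch.nn as nn

class ModalAlignment(nn.Module):
    """Aligns word features with image-region features via cross-attention
    (an assumed realization of the abstract's modal alignment module)."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_feats, region_feats):
        # Each word attends over candidate affective image regions.
        aligned, _ = self.attn(query=text_feats,
                               key=region_feats,
                               value=region_feats)
        return aligned

class CrossModalGate(nn.Module):
    """Blends text features with aligned visual features via a learned
    sigmoid gate (an assumed form of the cross-modal gating module)."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, text_feats, aligned_feats):
        # Gate in [0, 1] decides, per word and per channel, how much
        # to keep from the text versus the aligned visual stream.
        g = torch.sigmoid(self.gate(torch.cat([text_feats, aligned_feats], dim=-1)))
        return g * text_feats + (1 - g) * aligned_feats

# Usage with hypothetical shapes: batch 2, 20 words, 36 regions, dim 256.
text = torch.randn(2, 20, 256)
regions = torch.randn(2, 36, 256)
aligned = ModalAlignment(256)(text, regions)
fused = CrossModalGate(256)(text, aligned)  # (2, 20, 256) fused features
```

The gate lets the model weight the text channel against the aligned visual channel separately for each word, which is one common way such cross-modal gating is realized; the fused features would then feed a standard emotion classifier.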
