Abstract
In traditional multimodal sentiment analysis, feature fusion is usually achieved by simple concatenation, and multimodal sentiment analysis is trained as a single task. This ignores both the contribution of inter-modal information interaction to sentiment analysis and the correlation and constraint relationships between the multimodal task and the single-modal (text, video, and audio) tasks. This paper therefore proposes a multi-task model based on an interactive attention mechanism, which uses inter-modal attention and single-modal self-attention to train multimodal sentiment analysis jointly with single-modal sentiment analysis. In this way, the model makes full use of information sharing between modalities and between tasks, allows the tasks to complement one another, and reduces noise, thereby improving overall recognition performance. Experiments show that the proposed model performs well on the widely used MOSI and MOSEI multimodal sentiment analysis datasets.
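The abstract does not give implementation details, but the described architecture (single-modal self-attention, inter-modal attention, and joint multi-task training) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the module names, dimensions, pooling, text-as-query design, and auxiliary loss weight are all assumptions, and the regression-style heads simply reflect that MOSI/MOSEI provide continuous sentiment scores.

```python
# A hedged sketch of a multi-task model with inter-modal attention and
# single-modal self-attention. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class MultiTaskInteractiveAttention(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        # Single-modal self-attention for each of text, audio, video
        self.self_attn = nn.ModuleDict({
            m: nn.MultiheadAttention(dim, heads, batch_first=True)
            for m in ("text", "audio", "video")
        })
        # Inter-modal attention: text features query audio and video (assumed design)
        self.cross_attn = nn.ModuleDict({
            m: nn.MultiheadAttention(dim, heads, batch_first=True)
            for m in ("audio", "video")
        })
        # One head per single-modal task plus one multimodal head (multi-task setup)
        self.uni_heads = nn.ModuleDict(
            {m: nn.Linear(dim, 1) for m in ("text", "audio", "video")}
        )
        self.multi_head = nn.Linear(3 * dim, 1)

    def forward(self, feats):
        # feats: dict of (batch, seq_len, dim) tensors keyed by "text", "audio", "video"
        pooled, enriched = {}, {}
        for m, x in feats.items():
            h, _ = self.self_attn[m](x, x, x)   # single-modal self-attention
            pooled[m] = h.mean(dim=1)           # simple mean pooling over the sequence
        for m in ("audio", "video"):            # inter-modal interaction with text
            h, _ = self.cross_attn[m](feats["text"], feats[m], feats[m])
            enriched[m] = h.mean(dim=1)
        uni_preds = {m: self.uni_heads[m](pooled[m]) for m in pooled}
        fused = torch.cat(
            [pooled["text"], enriched["audio"], enriched["video"]], dim=-1
        )
        return self.multi_head(fused), uni_preds

# Joint loss: the multimodal task plus weighted auxiliary single-modal tasks.
def multitask_loss(multi_pred, uni_preds, label, aux_weight=0.3):
    mse = nn.functional.mse_loss
    loss = mse(multi_pred.squeeze(-1), label)
    for p in uni_preds.values():
        loss = loss + aux_weight * mse(p.squeeze(-1), label)
    return loss
```

The single-modal heads act as auxiliary tasks that constrain the shared representations, which is one plausible way to realize the inter-task information sharing the abstract describes.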