Abstract
With the development of social media, users increasingly express their sentiments (broadly covering sentiment polarities, emotions, sarcasm, and so on) toward fine-grained aspects (e.g., entities) in multimodal content, which mostly comprises images and text. Consequently, the automated recognition of sentiments over different aspects in multimodal content, namely Multimodal Aspect-Based Sentiment Analysis (MABSA), has become an emerging research area. This paper assesses state-of-the-art MABSA methods based on a systematic taxonomy of its subtasks, compiles advanced models for each subtask, and offers a concise overview of popular datasets and evaluation metrics. Finally, we discuss the limitations of current research and highlight promising directions for future work.