Abstract

Over the last half decade, a growing number of published works has demonstrated tremendous progress in multimodal sentiment analysis. In real-life communication, people spontaneously modulate their tone to accentuate specific points or to express their sentiments. This work introduces a supervised fuzzy rule-based system for multimodal sentiment classification that identifies the sentiment expressed in video reviews posted on social media platforms. It has been demonstrated that multimodal sentiment analysis can be performed effectively through the joint use of linguistic and acoustic modalities. In this paper, an ingenious set of fuzzy rules is applied to label each review with a positive or negative sentiment. The confidence scores obtained from supervised Support Vector Machine (SVM) classification of the text and speech cues serve as the input variables for the fuzzy rules. The fusion of fuzzy logic with acoustic and linguistic features for sentiment classification contributes a new paradigm in multimodal sentiment analysis. Our fuzzy approach has been compared with eight state-of-the-art supervised machine learning techniques. Experiments on benchmark datasets yield 82.5% accuracy for our approach, which is higher than the state-of-the-art.
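To make the described pipeline concrete, the minimal Python sketch below shows how per-modality SVM confidence scores could be fused by a small Mamdani-style fuzzy rule base. The SVM kernels, membership-function breakpoints, the four rules, and names such as p_text and p_audio are illustrative assumptions for this sketch, not the rule base or parameters used in the paper.

    # Minimal Python sketch of the pipeline described above. The membership
    # functions, rule base, and variable names are illustrative assumptions,
    # not the paper's actual design.
    from sklearn.svm import SVC

    # Per-modality SVMs; probability=True exposes class-confidence scores.
    text_svm = SVC(kernel='linear', probability=True)
    audio_svm = SVC(kernel='rbf', probability=True)
    # text_svm.fit(X_text, y); audio_svm.fit(X_audio, y)  # hypothetical training data

    def high(x, a=0.4, b=0.6):
        """Ramp-up membership: 0 below a, rising linearly to 1 at b."""
        return min(max((x - a) / (b - a), 0.0), 1.0)

    def low(x, a=0.4, b=0.6):
        """Ramp-down membership: complement of the 'high' fuzzy set."""
        return 1.0 - high(x, a, b)

    def fuzzy_fuse(p_text, p_audio):
        """Fuse the two positive-class confidences with illustrative rules
        (Mamdani-style inference: AND = min, rule aggregation = max)."""
        # Rule 1: IF text is high AND audio is high THEN sentiment is positive.
        # Rule 2: IF text is low  AND audio is low  THEN sentiment is negative.
        # Rules 3-4: on disagreement, the text modality dominates in this sketch.
        pos = max(min(high(p_text), high(p_audio)), min(high(p_text), low(p_audio)))
        neg = max(min(low(p_text), low(p_audio)), min(low(p_text), high(p_audio)))
        return 'positive' if pos >= neg else 'negative'

    # Example: p_text = text_svm.predict_proba(x_text)[0, 1], likewise for audio.
    print(fuzzy_fuse(0.80, 0.35))  # -> positive

In this sketch the defuzzification step is reduced to comparing the aggregated firing strengths of the positive and negative rule groups, which is sufficient for a binary positive/negative label.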
