Abstract
The exponential growth of multimodal content in today's competitive business environment produces a huge volume of unstructured data. Unstructured big data has no particular format or structure and can take any form, such as text, audio, images, and video. In this paper, we address the challenges that unstructured big data with different modalities poses for emotion and sentiment modeling. We first present an up-to-date review of emotion and sentiment modeling, including state-of-the-art techniques. We then propose a new architecture for multimodal emotion and sentiment modeling for big data. The proposed architecture consists of five essential modules: a data collection module, a multimodal data aggregation module, a multimodal data feature extraction module, a fusion and decision module, and an application module. Novel feature extraction techniques, called divide-and-conquer principal component analysis (Div-ConPCA) and divide-and-conquer linear discriminant analysis (Div-ConLDA), are proposed for the multimodal data feature extraction module. Experiments on a multicore machine architecture are performed to validate the performance of the proposed techniques.
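The paper's exact Div-ConPCA algorithm is not reproduced here; the following Python sketch only illustrates the general divide-and-conquer pattern the name suggests: partition the samples into blocks, compute a principal subspace per block (a step that can run in parallel on a multicore machine), and merge the per-block subspaces with a final decomposition. The function name, block count, and merging strategy are illustrative assumptions, not the authors' implementation.

import numpy as np

def div_con_pca(X, n_components, n_blocks=4):
    # Divide step: split the samples into blocks; each block's PCA is
    # independent, so blocks can be assigned to separate cores.
    blocks = np.array_split(X, n_blocks, axis=0)
    partial_bases = []
    for block in blocks:
        centered = block - block.mean(axis=0)
        # Per-block principal directions via SVD of the centered block.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        partial_bases.append(vt[:n_components])
    # Conquer step (an assumed merging strategy): stack the per-block
    # bases and extract a consensus subspace with one more, small SVD.
    _, _, vt = np.linalg.svd(np.vstack(partial_bases), full_matrices=False)
    components = vt[:n_components]
    return (X - X.mean(axis=0)) @ components.T

# Example: project 1,000 samples of 64-dimensional features onto 8 components.
X = np.random.randn(1000, 64)
print(div_con_pca(X, n_components=8).shape)  # (1000, 8)

The same divide-and-conquer pattern would apply to Div-ConLDA, with the per-block decomposition replaced by a linear discriminant analysis step.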
Highlights
In today's competitive business environment, emotion and sentiment modeling techniques are important tools for a business to measure consumer feelings towards its services and products, and how it fares in relation to its competitors
This paper addresses these challenges by proposing a new architecture for multimodal emotion and sentiment modeling from unstructured Big data sources, such as the Internet and social networks
This paper presents two novel divide-and-conquer feature extraction techniques for Big data analytics
Summary
In today's competitive business environment, emotion and sentiment modeling techniques are important tools for a business to measure consumer feelings towards its services and products, and how it fares in relation to its competitors. One reviewed work used the AVEC2013 and AVEC2014 databases in conjunction with deep convolutional neural networks (DCNN) to classify emotions into different ranges of depression severity. Another recent paper, by Kaya et al. [63], proposed video-based emotion recognition using deep transfer learning and score fusion. Works by Poria et al. [10], [74] investigated multimodal analysis of text, audio, and video using different datasets (the YouTube dataset, the Multimodal Opinion Utterances Dataset (MOUD), IEMOCAP, the International Survey of Emotion Antecedents and Reactions (ISEAR), and CK+); in these works, combinations of feature-level and decision-level fusion techniques were used. A recent work in 2017 by Zhong et al. [81] used the MAHNOB-HCI database with an AdaBoosted trees classifier, and both feature-level and decision-level fusion techniques were investigated; their results showed accuracies of 69% and 72% for valence and arousal, respectively. The authors in [24] showed that the "surprise" emotion is likely to indicate a positive impact on business activities and can be given a higher weight in the model
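As a concrete illustration of weighted decision-level fusion with a class-specific weight such as the one suggested in [24], the following Python sketch combines hypothetical per-modality class probabilities. All weight and probability values are invented for illustration and would in practice be tuned on validation data; the snippet is not the fusion scheme of any of the cited works.

import numpy as np

# Hypothetical class probabilities from three modality classifiers,
# over the classes [anger, joy, sadness, surprise].
text_probs  = np.array([0.10, 0.50, 0.10, 0.30])
audio_probs = np.array([0.20, 0.40, 0.20, 0.20])
video_probs = np.array([0.05, 0.35, 0.10, 0.50])

# Decision-level fusion: a weighted sum of the modality-level scores.
modality_weights = np.array([0.4, 0.3, 0.3])  # assumed values
fused = (modality_weights[0] * text_probs
         + modality_weights[1] * audio_probs
         + modality_weights[2] * video_probs)

# Per-class weighting: emphasize "surprise", which [24] links to a
# positive impact on business activities.
class_weights = np.array([1.0, 1.0, 1.0, 1.5])  # illustrative values
print(np.argmax(fused * class_weights))  # index of the predicted class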