Abstract

Posting visual data on social networks has become a common trend. In particular, users post selfies or facial images on social media that depict various moods at different instances. This has drawn researchers' attention to mining facial expressions from social media images. The aim of the present work is to improve the performance of emotion analysis in terms of both accuracy and reliability by developing new strategies for analyzing image posts on social media. In this work, a novel model is presented that relies on transformed features. Six distinct emotion classes (labeled 0 through 5) are considered: 0: Sad, 1: Fear, 2: Awful, 3: Happy, 4: Surprised, 5: Satisfied. The model consists of three major stages: feature extraction, feature selection, and class labeling.
• The 2D Discrete Orthonormal Stockwell Transform (DOST) is used for feature extraction from facial images.
• Following feature extraction, feature selection is carried out using a bivariate t-test.
• Finally, the selected features are passed to an AdaBoost-based Random Forest classifier for Emotion Classification (ARFEC) that assigns labels for the different expression classes.
The Flickr8k, CK+, and FER2013 image databases are used to validate the efficiency of the developed ARFEC model. Analysis of the results shows the effectiveness of the ARFEC model, with overall accuracy rates of 89.5 %, 92.5 %, and 89.5 %, respectively, on these databases. Compared with existing methods such as Support Vector Machine and K-Nearest Neighbors, the ARFEC model yields better results in terms of overall accuracy.
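
The following is a minimal, hypothetical sketch of the three-stage pipeline summarized above, not the authors' implementation. It assumes the 2D DOST coefficients have already been extracted and flattened into a feature matrix (random placeholder data is used here), substitutes scikit-learn's univariate ANOVA F-test for the paper's bivariate t-test, and uses AdaBoost with Random Forest base learners for the six-class labeling stage. All parameter values shown are illustrative assumptions.

```python
# Hypothetical sketch of the ARFEC-style pipeline (not the authors' code).
# X stands in for flattened 2D DOST coefficients of face images; y holds the
# six emotion labels 0-5 (Sad, Fear, Awful, Happy, Surprised, Satisfied).

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Placeholder data: 600 samples, 512 DOST-style features, 6 emotion classes.
X, y = make_classification(n_samples=600, n_features=512, n_informative=64,
                           n_classes=6, random_state=0)

arfec = Pipeline([
    # Stage 2: statistical feature selection (ANOVA F-test used here as a
    # stand-in for the bivariate t-test described in the abstract).
    ("select", SelectKBest(score_func=f_classif, k=128)),
    # Stage 3: AdaBoost boosting over Random Forest base estimators.
    # Note: scikit-learn < 1.2 uses the keyword base_estimator instead of estimator.
    ("classify", AdaBoostClassifier(
        estimator=RandomForestClassifier(n_estimators=50, random_state=0),
        n_estimators=20, random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
arfec.fit(X_train, y_train)
print("held-out accuracy:", arfec.score(X_test, y_test))
```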
