Abstract

Owing to its many applications, research on personal traits using social media information has become an important area. In this paper, a new method for classifying behavior-oriented social images uploaded to various social media platforms is presented. The proposed method introduces a multimodality concept that combines skin regions of different parts of the human body with background information, such as indoor and outdoor environments. For each image, the method detects skin candidate components based on the R, G, B color spaces and entropy features. An iterative mutual nearest neighbor approach is proposed to refine these skin candidate components, which yield the foreground components. Next, the method detects the remaining part of the image (other than the skin components) as background components, based on the structure tensor of the R, G, B color spaces and the Maximally Stable Extremal Regions (MSER) concept in the wavelet domain. We then apply the Hanman Transform to extract context features from the foreground and background components through clustering and fusion operations. These features are fed to an SVM classifier for the classification of behavior-oriented images. Comprehensive experiments on the 10-class Normal Behavior-Oriented Social media Image (NBSI) and Abnormal Behavior-Oriented Social media Image (ABSI) datasets show that the proposed method is effective and outperforms existing methods in terms of average classification rate. Furthermore, results on a benchmark dataset of five personality-trait classes and on two emotion classes of different facial expressions (FERPlus dataset) demonstrate the robustness of the proposed method over existing methods.
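To illustrate the first stage of the pipeline, the sketch below marks skin candidate pixels from the R, G, B channels. The threshold rule used here is a classic rule-of-thumb skin test and is only a hypothetical stand-in for the paper's actual R, G, B + entropy detector and the iterative mutual nearest neighbor refinement, which the abstract does not specify in detail.

```python
import numpy as np

def skin_candidate_mask(img):
    """Mark skin candidate pixels with a classic RGB rule
    (hypothetical placeholder for the paper's detector):
    R > 95, G > 40, B > 20, R > G, R > B and |R - G| > 15."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r > g) & (r > b) & (np.abs(r - g) > 15))

# Tiny 2x2 demo image: one skin-like pixel, three non-skin pixels.
img = np.array([[[180, 120, 90], [30, 30, 30]],
                [[0, 200, 0], [255, 255, 255]]], dtype=np.uint8)
mask = skin_candidate_mask(img)
print(mask)  # only the top-left pixel is a skin candidate
```

In the proposed method, the resulting candidate mask would define the foreground components, while the complementary region would be passed to the structure-tensor and MSER-based background analysis.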
