Deep visual data analysis of social networks has become an increasingly important area of research, as it makes it possible to discover new information about social users, including their emotions. To recognize users' emotions and other latent attributes, most existing approaches have relied on textual data and obtained accurate results; however, little attention has been paid to visual data, which has become increasingly popular in recent years. This work describes a conceptual representation model for social network analysis and social emotion analysis based on the integration of fuzzy logic and ontological modeling. The primary aim is to create an ontology that can extract new information about a social user's mood, classified as panic, no-panic, or neutral. Fuzzy logic is needed to handle subjective data, as users share imprecise and vague information on their social profiles; it has proven to be a successful method for capturing the expression of emotions, given the fuzzy nature of emotions and the ambiguous definitions of emotion words. The proposed work investigates the role of fuzzy logic in social network analysis. This study simulates a fuzzy deep system integrated with an ontology for classifying social visual data (shared images) into panic, no-panic, or neutral classes in order to determine social users' stress intensity. Social distancing and the large amount of data shared in Tunisia were used to demonstrate this classification. The experiments performed in this paper produce not only a novel annotated visual database, named the visual panic database, but also a new semantic model for modeling users' profiles in social networks based on the combination of ontology and deep learning techniques. In future work, we will combine a user's visual and textual data to further improve recognition performance.
The proposed fuzzy system reflected the viral proliferation of panic among stressed users and achieved an accuracy of 87%.
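To illustrate the kind of fuzzy classification the abstract describes, the following is a minimal sketch, not the authors' implementation: it assumes a stress-intensity score in [0, 1] (e.g., produced by a deep model from a shared image) and uses hypothetical triangular membership functions with made-up thresholds to map the score to panic, no-panic, or neutral.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(score):
    """Return the mood class with the highest fuzzy membership.

    The membership breakpoints below are illustrative assumptions,
    not values taken from the paper.
    """
    memberships = {
        "no-panic": tri(score, -0.01, 0.0, 0.4),  # low stress intensity
        "neutral":  tri(score, 0.2, 0.5, 0.8),    # moderate stress intensity
        "panic":    tri(score, 0.6, 1.0, 1.01),   # high stress intensity
    }
    return max(memberships, key=memberships.get)
```

Because adjacent membership functions overlap (e.g., between 0.2 and 0.4), a borderline score has partial membership in two classes, which is how the fuzzy approach accommodates the vagueness of emotional expression the abstract emphasizes.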