Abstract

Facial expressions serve as crucial indicators of an individual's psychological state and play a pivotal role in face-to-face communication. This research aims to advance collaboration between machines and humans through a thorough investigation of facial expressions. Specifically, we analyze emotional variations related to changes in skin tone across genders and cultural backgrounds (Black and White). The research methodology is structured in three phases. In Phase I, image data are acquired and carefully processed from the Chicago face dataset, yielding 12,402 augmented images across five emotion classes. Phase II involves the identification of Regions of Interest (ROIs) and the extraction of RGB values from these ROIs as features; several methods, including those proposed by Kovac, Swift, and Saleh, are employed for precise skin identification. The final phase, Phase III, centers on the in-depth analysis of emotions and presents the research findings. Statistical techniques, including descriptive statistics, independent-sample t-tests for gender and cross-cultural comparisons, and two-way ANOVA, are applied with red, green, and blue pixel values as response variables and gender and emotion as explanatory variables. Where null hypotheses are rejected, a post hoc test is applied to identify significantly different pairs of means. The results indicate that both cross-cultural background and gender significantly influence pixel color, underscoring the impact of different localities on pixel coloration. Across the various expressions, our results exhibit a minimal 0.05% error rate in all classifications. Notably, green pixel values do not differ significantly between the Anger and Neutral emotions, suggesting a near-identical appearance of the green channel in these two states. These findings contribute to a nuanced understanding of the intricate relationship between facial expressions, gender, and cultural background, providing valuable insights for future research in human–machine interaction and emotion recognition.
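
To make the Phase II skin-identification step concrete, the sketch below implements the Kovac rule for daylight illumination (R > 95, G > 40, B > 20, max − min > 15, |R − G| > 15, R > G, R > B) and averages RGB values over the skin pixels of an ROI. This is a minimal sketch, not the paper's code: the function names, the rectangular ROI format, and the use of NumPy are assumptions.

```python
import numpy as np

def kovac_skin_mask(image: np.ndarray) -> np.ndarray:
    """Boolean mask of skin pixels for an RGB image of shape (H, W, 3).

    Implements the Kovac daylight-illumination rule:
    R > 95, G > 40, B > 20, max-min > 15, |R-G| > 15, R > G, R > B.
    """
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    # Spread between the strongest and weakest channel at each pixel.
    spread = image.max(axis=-1).astype(int) - image.min(axis=-1).astype(int)
    return (
        (r > 95) & (g > 40) & (b > 20)
        & (spread > 15)
        & (np.abs(r - g) > 15)
        & (r > g) & (r > b)
    )

def mean_rgb_of_roi(image: np.ndarray, roi: tuple) -> np.ndarray:
    """Mean R, G, B over the skin pixels inside a rectangular ROI.

    `roi` is (top, bottom, left, right) in pixel coordinates -- an assumed
    ROI format; the paper does not specify one.
    """
    top, bottom, left, right = roi
    patch = image[top:bottom, left:right]
    mask = kovac_skin_mask(patch)
    return patch[mask].mean(axis=0)  # -> array([mean_R, mean_G, mean_B])
```

Applying the mask before averaging keeps hair, background, and shadow pixels out of the per-ROI RGB features that feed the Phase III statistics.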
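The Phase III pipeline (independent-sample t-tests, two-way ANOVA, and a post hoc comparison of means) can likewise be sketched with SciPy and statsmodels. The DataFrame below holds synthetic placeholder values with illustrative column names; the emotion labels beyond Anger and Neutral are assumptions, and Tukey's HSD stands in for the unnamed post hoc test used in the study.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic placeholder data standing in for the Phase II features;
# the study itself uses per-ROI means from the Chicago face dataset.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    'red': rng.normal(150, 10, n),
    'green': rng.normal(110, 10, n),
    'blue': rng.normal(90, 10, n),
    'gender': rng.choice(['male', 'female'], n),
    'emotion': rng.choice(['anger', 'neutral', 'happy', 'fear'], n),
})

for channel in ('red', 'green', 'blue'):
    # Independent-sample t-test for the gender comparison.
    male = df.loc[df['gender'] == 'male', channel]
    female = df.loc[df['gender'] == 'female', channel]
    t, p = stats.ttest_ind(male, female, equal_var=False)
    print(f'{channel}: gender t-test t={t:.2f} p={p:.4f}')

    # Two-way ANOVA: channel value ~ gender, emotion, and their interaction.
    model = smf.ols(f'{channel} ~ C(gender) * C(emotion)', data=df).fit()
    print(anova_lm(model, typ=2))

    # Post hoc Tukey HSD to locate which pairs of emotion means differ
    # (run when the ANOVA null hypothesis is rejected).
    print(pairwise_tukeyhsd(df[channel], df['emotion']))
```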
