Abstract

A cognitive analysis of facial features can make facial expression recognition systems more robust and efficient for Human-Machine Interaction (HMI) applications. In this work, we propose a new methodology to improve the accuracy of facial expression recognition even under constraints such as partially hidden faces or occlusions in real-time applications. As a first step, seven independent facial segments are considered for recognizing facial expressions: the full face, the half face (left/right), the upper half of the face, the lower half of the face, the eyes, the mouth, and the nose. Unlike the work reported in the literature, where arbitrarily generated patch-type occlusions on facial regions are used, this work presents a detailed analysis of each facial feature. Using the results thus obtained, the seven sub-models are combined with a stacked-generalization ensemble method, using a deep neural network as the meta-learner, to improve the accuracy of facial expression recognition even in the occluded state. The accuracy of the proposed model improves by up to 30% compared with the individual model accuracies on cross-corpus datasets across the seven models. The proposed system uses a CNN with RPA compliance and is also configured on a Raspberry Pi, so it can be used for HRI and Industry 4.0 applications that involve face occlusion and partially hidden face challenges.
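A minimal sketch of the stacked-generalization idea described above, not the authors' implementation: seven CNN base learners (one per facial segment) each output a probability vector over the expression classes, and a small deep-neural-network meta-learner is trained on the concatenated base predictions. The 48x48 input resolution, layer sizes, segment names, and the seven expression classes are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # e.g. anger, disgust, fear, happiness, sadness, surprise, neutral (assumed)
REGIONS = ["full_face", "half_face", "upper_half", "lower_half", "eyes", "mouth", "nose"]

def build_region_cnn(input_shape=(48, 48, 1)):
    """Small CNN base learner for one facial segment (sizes are illustrative)."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def build_meta_learner(num_base=len(REGIONS)):
    """DNN meta-learner over the concatenated base-model probability vectors."""
    return models.Sequential([
        layers.Input(shape=(num_base * NUM_CLASSES,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

# Stage 1: train one CNN per facial segment (training data loading omitted),
# then collect their held-out predictions as meta-features.
base_models = {region: build_region_cnn() for region in REGIONS}
# meta_features = np.concatenate(
#     [base_models[r].predict(x_val[r]) for r in REGIONS], axis=1)

# Stage 2: the meta-learner maps the stacked predictions to the final label,
# which is what allows a weak vote from an occluded region to be outweighed
# by the remaining visible segments.
meta = build_meta_learner()
meta.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# meta.fit(meta_features, y_val, epochs=20, batch_size=32)
```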
