Abstract

Recognizing health status from human faces is a challenging research topic. Facial expressions can indirectly reflect a person's inner health status, which gives expression recognition significant commercial and research value. Most work on facial expression recognition uses traditional methods, whose accuracy depends heavily on hand-crafted feature extraction. Deep learning has already advanced research on facial expression recognition. This paper proposes a dual-branch network that merges global facial information with local information obtained through an attention mechanism to identify facial emotional information. A shared pre-training module extracts low-level semantic information from both the global image and the local sub-images. The dual-branch architecture uses an attention module to capture the relationships between different sub-images and to fuse the local features of the face. Experimental results demonstrate that the accuracy on the CK+ dataset reaches 95.96%, an improvement over other existing methods.
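The abstract does not specify the fusion details, but the core idea of attention-weighted fusion of local sub-image features with a global feature can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation: the feature dimensions, the number of sub-images, the scoring vector `w`, and the concatenation-based fusion are all assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(global_feat, local_feats, w):
    """Fuse global and local facial features with attention (illustrative sketch).

    global_feat: (d,) feature of the whole face image
    local_feats: (n, d) features of n facial sub-images (e.g. eyes, mouth)
    w:           (d,) hypothetical learned scoring vector
    """
    scores = local_feats @ w            # one relevance score per sub-image
    alpha = softmax(scores)             # attention weights, sum to 1
    local_fused = alpha @ local_feats   # weighted sum of local features, (d,)
    # Concatenate the global branch with the attention-fused local branch
    return np.concatenate([global_feat, local_fused]), alpha

rng = np.random.default_rng(0)
d = 8
global_feat = rng.standard_normal(d)
local_feats = rng.standard_normal((5, d))   # 5 facial sub-images (assumed count)
w = rng.standard_normal(d)
fused, alpha = attention_fuse(global_feat, local_feats, w)
print(fused.shape)   # (16,)
```

In a real network the two branches would be convolutional feature extractors sharing a pre-trained stem, and the fused vector would feed a classifier over the expression categories; here plain vectors stand in for those feature maps.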
