Abstract

Nonfrontal facial expression recognition in the wild is key for artificial intelligence and human-computer interaction, but it is easily disturbed by changes in head pose. This paper therefore presents a face rebuilding method based on PRNet, which builds a frontal 3D face from a 2D head photo at any pose. Even after frontalization, however, expressions remain difficult to recognize because facial features are weakened, as has been widely reported in previous studies. It can be shown that all muscle parameters of the frontalized face are weaker than those of the real face, except for the muscle moving direction in each small facial area. This paper therefore also designs a muscle movement rebuilding and intensifying method: using 3D face contours and the Fréchet distance, the muscle moving direction in each muscle area is extracted, and muscle movement is strengthened along these directions to intensify the whole facial expression. In this way, nonfrontal facial expressions can be recognized effectively.
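The abstract names two computational ingredients: comparing 3D face contours with the Fréchet distance and strengthening muscle movement along the extracted moving directions. Below is a minimal sketch of both steps, assuming contours are given as NumPy arrays of 3D points; `discrete_frechet`, `intensify`, and the `gain` factor are illustrative names and parameters, not the paper's actual implementation.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two 3D contours P (n x 3) and Q (m x 3)."""
    n, m = len(P), len(Q)
    dist = lambda i, j: np.linalg.norm(P[i] - Q[j])
    ca = np.zeros((n, m))
    ca[0, 0] = dist(0, 0)
    for i in range(1, n):                      # first column
        ca[i, 0] = max(ca[i - 1, 0], dist(i, 0))
    for j in range(1, m):                      # first row
        ca[0, j] = max(ca[0, j - 1], dist(0, j))
    for i in range(1, n):                      # dynamic-programming fill
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           dist(i, j))
    return ca[-1, -1]

def intensify(neutral, expressive, gain=1.5):
    """Strengthen muscle movement by scaling each vertex's displacement
    along its own moving direction (gain is a hypothetical parameter)."""
    direction = expressive - neutral           # per-vertex moving direction
    return neutral + gain * direction
```

The key point the sketch illustrates is that only the displacement along each region's moving direction is amplified; the neutral geometry itself is left unchanged.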

Highlights

  • Nonfrontal facial expression recognition (FER) in the wild is very important for artificial intelligence (AI), and it is the key to human-computer interaction (HCI) [1]

  • For every face in a 2D photo, the Position Map Regression Network (PRNet) can build a specific 3D face. Frontalized faces can be obtained from these 3D faces, and muscle movement can be extracted from them (a sketch of this step follows this list)

  • To test this expression recognition method effectively, the SFEW2.0 library is used. This library contains a great number of photos taken from many famous films; the human faces in these photos were captured in the wild and their head poses are varied. Common FER methods can be disturbed seriously by this library, and the recognition rates of traditional methods are only 60–70% in the wild
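The highlights mention obtaining frontalized faces and muscle movement from PRNet's 3D reconstruction. The sketch below shows one way this step could look: the `PRN` class and its `process` / `get_vertices` methods come from the public PRNet demo code, while `frontalize`, the frontal template mesh, and the Kabsch/SVD alignment are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from skimage.io import imread
from api import PRN                     # PRNet's demo interface (YadiraF/PRNet)

def frontalize(vertices, template):
    """Rigidly rotate dense 3D face vertices onto a frontal template mesh
    (Kabsch/SVD alignment); an illustrative frontalization step."""
    src = vertices - vertices.mean(axis=0)
    dst = template - template.mean(axis=0)
    U, _, Vt = np.linalg.svd(src.T @ dst)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return src @ R.T + template.mean(axis=0)

prn = PRN(is_dlib=True)                 # use dlib to detect and crop the face
image = imread("face_in_the_wild.jpg")  # hypothetical input photo
pos = prn.process(image)                # UV position map regressed by PRNet
vertices = prn.get_vertices(pos)        # dense 3D vertices of the face
# 'frontal_template' would hold the vertices of a reference frontal face:
# frontal = frontalize(vertices, frontal_template)
```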


Summary

Introduction

Nonfrontal facial expression recognition (FER) in the wild is very important for artificial intelligence (AI), and it is the key to human-computer interaction (HCI) [1]. Nonfrontal faces are caused by head turning and pitching and by changes in camera viewpoint, which can deform the face shape significantly and cause FER errors [5]. The important features for expression recognition, including the appearance and position of the brows, eyes, cheeks, and mouth, differ greatly from those of a frontal face and cause serious FER errors. In particular, when the head is turning, some parts of the face may not be illuminated sufficiently and can appear dark and blurry in the photos, which may also cause FER errors [7]. Because the geometrical relationships among key points on a deformed face image can differ greatly from those of a frontal face, traditional FER methods are disturbed seriously [8]

