Abstract

Recent years have seen a rapid increase in multimedia content, particularly visual content. Extracting facial features, evaluating expressions, and thereby predicting a person's emotions has become an active research topic. This paper proposes a methodology for predicting emotions from still images and consecutive video frames. The Facial Action Coding System (FACS) is widely used in the development of automated vision-based emotion detection systems. Employing FACS, the authors estimate facial muscle movement by computing 24 landmark points, 16 mutual distances between them, and the wrinkles produced by changing expressions. Canny edge detection is used to measure wrinkle intensity, while geometric landmark positions and optical flow are the key techniques in the implemented methodology. The approach was evaluated on a self-generated dataset, the JAFFE dataset, and EmotioNet.
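The abstract names three feature types (landmark distances, Canny-based wrinkle intensity, optical flow) but does not give implementation details. The following is a minimal sketch of how such features could be computed, assuming OpenCV and NumPy; the specific 24 landmarks, the 16 landmark pairs, the wrinkle regions, the Canny thresholds, and the choice of the Farneback dense optical-flow method are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the three feature types described in the abstract.
# All landmark indices, regions, and thresholds below are hypothetical placeholders.
import cv2
import numpy as np


def mutual_distances(landmarks: np.ndarray, pairs: list[tuple[int, int]]) -> np.ndarray:
    """Euclidean distances between selected landmark pairs (geometric features)."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])


def wrinkle_intensity(gray_face: np.ndarray, region: tuple[slice, slice]) -> float:
    """Proxy for wrinkle intensity: density of Canny edge pixels in a face region."""
    patch = gray_face[region]
    edges = cv2.Canny(patch, 50, 150)  # thresholds are illustrative, not from the paper
    return float(np.count_nonzero(edges)) / edges.size


def frame_motion(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    """Dense optical flow between consecutive frames (Farneback method, one common choice)."""
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    landmarks = rng.uniform(0, 100, size=(24, 2))        # 24 landmark points (placeholder)
    pairs = [(i, i + 1) for i in range(16)]              # 16 mutual distances (placeholder)
    face = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)
    next_face = np.roll(face, 2, axis=1)                 # synthetic motion for the flow demo

    print("distances:", mutual_distances(landmarks, pairs))
    print("wrinkle intensity:", wrinkle_intensity(face, (slice(20, 50), slice(10, 90))))
    print("mean flow magnitude:",
          np.linalg.norm(frame_motion(face, next_face), axis=2).mean())
```

In practice the landmarks would come from a face-landmark detector and the wrinkle regions would be anchored to those landmarks (e.g. forehead or nasolabial areas), with the resulting feature vector fed to a classifier.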
