Abstract

Most traditional expression classification systems track facial component regions such as the eyes, eyebrows, and mouth for feature extraction. This paper instead uses the facial components to locate dynamic facial textures such as frown lines, nose wrinkle patterns, and nasolabial folds to classify facial expressions. AdaBoost with Haar-like features and an Active Shape Model (ASM) are adopted to accurately detect the face and locate the important facial feature regions. A Gabor filter and the Laplacian of Gaussian are used to extract texture information from these regions; the resulting texture feature vectors represent the changes in facial texture from one expression to another. A Support Vector Machine is then trained to classify six facial expression types: neutral, happiness, surprise, anger, disgust, and fear. The Cohn-Kanade database was used to test the feasibility of the proposed method, and the average recognition rate reached 91.7%.
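The sketch below is a minimal illustration of the kind of pipeline the abstract describes, assuming OpenCV for Haar-cascade face detection and for the Gabor/Laplacian-of-Gaussian filtering, and scikit-learn for the SVM. It is not the authors' implementation: the ASM-based region localization is omitted (whole-face texture is used instead), and the filter parameters, patch size, and training data are placeholders.

```python
# Hedged sketch of a texture-feature expression classifier, not the paper's code.
import cv2
import numpy as np
from sklearn.svm import SVC

def texture_features(gray_face):
    """Concatenate Gabor and Laplacian-of-Gaussian responses of a face patch."""
    # Single Gabor kernel for illustration; the paper would use a filter bank.
    gabor = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                               lambd=10.0, gamma=0.5)
    gabor_resp = cv2.filter2D(gray_face, cv2.CV_32F, gabor)
    # Laplacian of Gaussian: Gaussian smoothing followed by the Laplacian.
    log_resp = cv2.Laplacian(cv2.GaussianBlur(gray_face, (5, 5), 1.5), cv2.CV_32F)
    return np.hstack([gabor_resp.flatten(), log_resp.flatten()])

# Haar-cascade face detector shipped with OpenCV (trained via AdaBoost).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract(image_path):
    """Detect the first face in an image and return its texture feature vector."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    x, y, w, h = detector.detectMultiScale(gray, 1.1, 5)[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    return texture_features(face)

# X_train / y_train would hold texture vectors and expression labels
# (neutral, happiness, surprise, anger, disgust, fear) from a labeled
# dataset such as Cohn-Kanade.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# prediction = clf.predict([extract("test_face.png")])
```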
