Abstract

Facial expressions let humans communicate and convey information and feelings to one another. Recognizing emotions involves four key stages: face detection, preprocessing, feature extraction and classification. Facial images contain more information than necessary, and background noise can also impair automated expression recognition. To remove unnecessary information and background noise, filtering and edge detection algorithms were applied during the preprocessing phase. The edge detectors highlight prominent facial components, locate sharp intensity discontinuities and filter out less important data. Key edge detectors, including the Sobel, Prewitt, Difference of Gaussians, Laplacian of Gaussian, Roberts, Kirsch and Canny operators, were used to preprocess the facial expression images. The Viola-Jones algorithm, local directional patterns and the k-nearest neighbor algorithm were used for face detection, feature extraction and classification, respectively. The best results were obtained on the extended Cohn-Kanade database (CK+) with local directional patterns, the Canny edge detector and k-nearest neighbor.
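The edge-detection preprocessing step can be illustrated with one of the operators named above. The following is a minimal sketch of a Sobel gradient-magnitude pass, which highlights sharp intensity discontinuities in a grayscale face image; the function names and the toy input are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T


def convolve2d(img, kernel):
    """Naive 'valid'-mode 2-D correlation for small kernels."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out


def sobel_edges(img):
    """Gradient magnitude via Sobel kernels; strong where intensity changes sharply."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.hypot(gx, gy)


# Toy 8x8 image with a vertical step edge at column 4.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
# The response is nonzero only in the columns straddling the step.
print(edges.shape)
```

In the full pipeline described in the abstract, such an edge map (or the output of a Canny detector, which gave the best results) would replace or augment the raw pixel values before local directional patterns are extracted and fed to the k-nearest neighbor classifier.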
