Abstract

This study addresses the real-time classification of human facial expressions in images. Implementing such a system in software has several practical benefits. One example is mood analysis in group photographs: recognizing the facial expressions of people photographed during an event can provide quantitative data on how much they enjoyed it overall. Another is context-aware image retrieval, where, for instance, only photos of surprised people are fetched from a database. In this context, seven emotions were classified from facial expressions: happiness, sadness, surprise, disgust, anger, fear, and neutral. Using an application written in the Python programming language, classical machine learning methods such as k-Nearest Neighbors and Support Vector Machines, and deep learning architectures such as AlexNet, ResNet, DenseNet, and Inception, were applied to the FER2013, JAFFE, and CK+ datasets. In addition to comparing classical machine learning methods with deep learning architectures, two separate applications were built to compare real-time and non-real-time operation. The study demonstrates that a real-time expression recognition system based on deep learning, with a suitably chosen architecture, can be implemented with high accuracy on ordinary computer hardware using a single piece of software. It further shows that a high accuracy rate is achieved in the real-time application when Histograms of Oriented Gradients (HOG) is used for feature extraction and the ResNet architecture is used for classification.
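As a rough illustration of the HOG feature-extraction step mentioned above, the sketch below computes per-cell orientation histograms on a 48×48 grayscale image (the FER2013 image size). This is a minimal, self-contained approximation written for clarity, not the paper's exact pipeline: the cell size, bin count, and the omission of block normalization are assumptions for the example.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal HOG sketch: gradient-magnitude-weighted orientation
    histograms per cell, with no block normalization (an assumed,
    simplified variant of the descriptor used in the study)."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]      # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]      # vertical gradient
    mag = np.hypot(gx, gy)                      # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation

    h, w = img.shape
    feats = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            m = mag[r:r + cell, c:c + cell].ravel()
            a = ang[r:r + cell, c:c + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# A 48x48 grayscale input, as in FER2013; (48/8)^2 cells * 9 bins = 324 features.
img = np.random.default_rng(0).random((48, 48))
vec = hog_features(img)
print(vec.shape)  # (324,)
```

The resulting feature vector is what a classical classifier (k-NN or SVM) would consume, or, in the study's best-performing real-time configuration, what feeds the ResNet-based classifier.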
