Abstract
Humans convey many emotions during a conversation, and facial expressions carry much of that emotional information. The major challenge is to understand facial expressions during communication; every face is an index of the mind. The objective of this study is to design a framework able to recognize human facial expressions. Permanent and temporary facial expressions appear during conversation and are detected using different face detection techniques. In this study, an emotion-based face identification system is proposed that employs different machine learning approaches. The Taiwanese Facial Expression Image Database (TFEID) has been used for three types of facial expression: angry, happy, and sad. Each expression class contains 40 images, giving a dataset of 120 (40 x 3) images in total. For image pre-processing, a median filter has been applied to this dataset and the color images have been converted to grayscale. Six non-overlapping regions of interest (ROIs) have been taken on every image, yielding 720 (120 x 6) ROIs over the whole dataset. Texture (T), histogram (H), and binary (B) features have been computed for these three categories, extracting 43 features per ROI and a total feature-vector space of 30,960 (720 x 43) values on the deployed dataset. The Best First Search (BFS) algorithm has been implemented for feature optimization. The optimized dataset has been passed to different machine learning classifiers, namely Random SubSpace, Random Committee, Bagging, Random Forest, J48, and LMT. Random Forest has shown the best overall accuracy among the deployed classifiers, at 95.277%.
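The pipeline above (split each grayscale image into six non-overlapping ROIs, compute per-ROI features, concatenate into one vector per image) can be sketched roughly as follows. The 2x3 grid layout and the small set of histogram statistics are illustrative assumptions, not the paper's exact 43-feature texture/histogram/binary set.

```python
import numpy as np

def split_rois(gray_img, rows=2, cols=3):
    """Split a grayscale image into rows*cols non-overlapping ROIs.

    The 2x3 grid is an assumption; the paper only states six ROIs per image.
    """
    h, w = gray_img.shape
    rh, rw = h // rows, w // cols
    return [gray_img[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            for r in range(rows) for c in range(cols)]

def histogram_features(roi, bins=8):
    """A few illustrative histogram statistics per ROI (not the 43-feature set)."""
    hist, _ = np.histogram(roi, bins=bins, range=(0, 256), density=True)
    return np.concatenate([hist, [roi.mean(), roi.std()]])

def image_to_vector(gray_img):
    """Concatenate per-ROI features into one feature vector for a classifier."""
    return np.concatenate([histogram_features(r) for r in split_rois(gray_img)])

# Example: one synthetic 120x120 grayscale image -> 6 ROIs -> one feature vector.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(120, 120)).astype(np.uint8)
vec = image_to_vector(img)
print(len(split_rois(img)), vec.shape)  # 6 (60,)
```

The resulting per-image vectors would then be stacked into a matrix and passed to a feature selector and a tree-ensemble classifier such as Random Forest.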
Highlights
In human communication, facial emotions play a very important role; for example, conduct disorder is associated with impairments in facial emotion recognition (FER).
The optimized dataset was deployed to machine vision (MV) classifiers, namely random forest (RF), logistic (Lg), and J48, which obtained very promising accuracies of 96.33%, 95.67%, and 95.33%, respectively.
Oral components transmit one-third of human communication and non-verbal components transmit two-thirds [2]. The researcher described the promising sectors in the field of FER, such as processing images streamed in real time from a mobile.
Summary
Generally, people conclude other people's emotional states, such as happy, sad, and angry, using facial expressions. Facial expression is an important part of nonverbal communication. To obtain the required evidence, the facial expression data is divided into several sections (such as eyes, nose, lips, chin, and skin) which are applied in different works; random forest and support vector machine (SVM) classifiers were used for classification and obtained 96.25% accuracy [5], [7]. The researcher studied male and female adolescents; eye tracking was used to relate categorization performance to participants' allocation of overt attention [8]. The researcher described facial expression datasets covering regions such as the eyes, nose, lips, and chin, and edge detection algorithms for eye and lip variation during human communication.
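The summary mentions edge detection algorithms for tracking eye and lip variation. A minimal, generic Sobel-style gradient sketch is shown below; it assumes a numpy grayscale array and is not the specific algorithm cited by the researcher.

```python
import numpy as np

def sobel_edges(gray):
    """Approximate gradient magnitude with 3x3 Sobel kernels (generic sketch)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                         # vertical gradient
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)  # per-pixel edge strength

# A vertical step edge should produce a strong response along the boundary.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
mag = sobel_edges(img)
print(mag.shape, mag[:, 2].max() > 0)  # (6, 6) True
```

In a facial-expression setting, such an edge map would be computed over eye and lip ROIs so that changes in edge structure between frames reflect expression variation.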