Abstract

Automated solutions for human facial expression recognition (FER) and emotion detection (ED) based on deep learning (DL) convolutional neural networks (CNN) are analysed. The need to develop FER and ED systems for a range of platforms, both stationary and mobile, is demonstrated; this imposes additional restrictions on the resource intensity of the DL CNN architectures used and on their training speed. Under conditions of an insufficient amount of annotated data, an approach to recognising the main motor action units (AU) of facial activity is proposed, based on transfer learning: publicly available DL CNNs pre-trained on the ImageNet dataset are adapted to the problems being solved. Networks of the MobileNet and DenseNet families were selected as the base architectures. A DL CNN model was developed for the human FER and ED problem, and the training method of the proposed model was modified, which made it possible to reduce training time and computing resources without losing reliability of AU recognition.

Keywords: Facial expression recognition, Convolutional neural network, Transfer learning, Emotion detection, Deep learning
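For illustration, the sketch below shows one way the transfer-learning setup described in the abstract could be assembled, assuming TensorFlow/Keras. The choice of MobileNetV2 as the frozen base, the number of AU outputs, the input size, and the classification head are assumptions made for the example, not details taken from the paper.

```python
# Minimal transfer-learning sketch: ImageNet-pretrained base (MobileNetV2 here;
# the paper also mentions the DenseNet family, e.g. DenseNet121) with a new
# task-specific head adapted to AU recognition. All sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_AU = 12                     # assumed number of facial action units
INPUT_SHAPE = (224, 224, 3)     # assumed input resolution

# Base network pre-trained on ImageNet, used as a fixed feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=INPUT_SHAPE, include_top=False, weights="imagenet")
base.trainable = False          # freezing the base cuts training time and resources

# New head: multi-label AU prediction, so sigmoid outputs with binary cross-entropy.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_AU, activation="sigmoid"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["binary_accuracy"])
model.summary()
```

In a typical fine-tuning schedule, the head is trained first with the base frozen; selected top layers of the base can then be unfrozen and trained at a lower learning rate if more annotated data become available.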
