Abstract

The inability to perceive visual and other non-verbal cues poses a significant challenge for individuals with visual impairment: it hinders correct conversational interaction and can be an impediment to various daily activities. Recent advances in computational resources, particularly in computer vision, can be leveraged to design effective applications for visually impaired people (VIP). Among assistive technologies, automated facial expression recognition with accurate real-time interpretation can help address this problem: facial emotions (e.g., sad, happy) can be robustly recognized and conveyed to the individual. In this paper, a partial transfer learning approach is adopted using a custom-trained Convolutional Neural Network (CNN) for facial emotion recognition. A novel model that transfers features from one dataset to another is proposed, enabling features learned from a small number of instances to be reused on new, challenging instances. Based on this newly trained CNN, a portable, lightweight facial expression recognition system with wireless connectivity and high detection accuracy was constructed, targeted specifically at VIP. The proposed recognition model provides a notable improvement over the current state of the art, achieving the highest recognition accuracy of 82.1% on the enhanced Facial Expression Recognition 2013 (FER2013) dataset. Moreover, with only 1.49M parameters, the model is operable on edge devices with limited memory and processing power. Overall, three labeled emotions (happy, sad, surprise) were recognized by the model with high accuracy, whereas anger, disgust, and fear showed relatively lower accuracy, with misclassifications most often labeled as sad.
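To put the 1.49M-parameter figure in context, the sketch below counts trainable parameters for a hypothetical small CNN on 48x48 grayscale FER2013 inputs. The layer stack is illustrative only (the paper's actual architecture is not given in the abstract); it merely shows why a compact CNN of this shape lands on the order of a million parameters, consistent with edge deployment.

```python
def conv2d_params(c_in, c_out, k):
    # each of the c_out filters has k*k*c_in weights plus one bias
    return (k * k * c_in + 1) * c_out

def dense_params(n_in, n_out):
    # fully connected layer: weight matrix plus one bias per output
    return (n_in + 1) * n_out

# Hypothetical stack: three 3x3 conv layers, each followed by 2x2 pooling
# (48 -> 24 -> 12 -> 6), then a dense head over the 7 FER2013 classes.
layers = [
    conv2d_params(1, 32, 3),          # 320
    conv2d_params(32, 64, 3),         # 18,496
    conv2d_params(64, 128, 3),        # 73,856
    dense_params(128 * 6 * 6, 256),   # 1,179,904 -- the dense head dominates
    dense_params(256, 7),             # 1,799
]
total = sum(layers)
print(f"total trainable parameters: {total:,}")  # ~1.3M for this sketch
```

Almost all of the budget sits in the first dense layer, which is why lightweight FER models of this size are feasible on memory-constrained edge devices.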
