Abstract

The ability to interpret emotions from facial expressions is essential in social interaction. However, this ability is often impaired in individuals with autism spectrum disorder (ASD). This study develops a real-time facial expression recognition system, delivered as a mobile phone application, to help individuals with ASD. The recognition model was trained on the FER-2013 dataset using the VGG-16 architecture. Experimental results show that the VGG-16 model achieves the best accuracy and average F1 score, both 0.91. In a trial with three individuals with ASD, the application increased their interest in learning facial expressions.
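The abstract names VGG-16 trained on FER-2013 (48x48 grayscale images, seven expression classes). As a rough illustration of that setup, the following is a minimal Keras sketch of a VGG-16-style classifier; the dense-head width, dropout rate, and input handling are assumptions for illustration, not the authors' reported configuration.

```python
# Minimal sketch of a VGG-16-style classifier for FER-2013.
# Assumptions (not from the paper): 48x48x1 input, 512-unit dense head,
# 0.5 dropout. FER-2013's seven classes are angry, disgust, fear,
# happy, sad, surprise, neutral.
import tensorflow as tf
from tensorflow.keras import layers, models

def vgg16_fer(input_shape=(48, 48, 1), num_classes=7):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # VGG-16 convolutional backbone: five blocks of 2-2-3-3-3
    # 3x3 convolutions, each block followed by 2x2 max pooling.
    for n_convs, filters in [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]:
        for _ in range(n_convs):
            model.add(layers.Conv2D(filters, 3, padding="same",
                                    activation="relu"))
        model.add(layers.MaxPooling2D(2))
    # Classifier head: flatten, one dense layer, softmax over classes.
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

model = vgg16_fer()
print(model.output_shape)  # (None, 7)
```

With 48x48 inputs, the five pooling stages reduce the spatial size to 1x1, so the flattened features feed directly into the dense head; a mobile deployment would typically convert such a model with TensorFlow Lite.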
