Abstract

The growing use of the Internet of Things (IoT) has increased the volume of data to be processed manifold. Edge computing can lessen the load of transmitting massive volumes of data to the cloud while also providing reduced latency and a real-time experience to users. This article proposes an emotion recognition system for facial images based on edge computing. A convolutional neural network (CNN) model is proposed to recognize emotion. The model is trained in the cloud during off-peak hours and downloaded to an edge server. During testing, an end device such as a smartphone captures a face image and performs preprocessing, which includes face detection, face cropping, contrast enhancement, and image resizing. The preprocessed image is then sent to the edge server, which runs the CNN model and infers a decision on the emotion. The decision is then transmitted back to the smartphone. Two data sets, JAFFE and the extended Cohn–Kanade (CK+), are used for evaluation. Experimental results show that the proposed system is energy efficient, has fewer learnable parameters, and achieves good recognition accuracy. The accuracies on the JAFFE and CK+ data sets are 93.5% and 96.6%, respectively.
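The device-side preprocessing steps described above (face cropping, contrast enhancement, and resizing) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the face bounding box is assumed to come from a separate detector (e.g. OpenCV's `CascadeClassifier`), and the 48x48 output size and min-max contrast stretching are illustrative choices not taken from the article.

```python
import numpy as np

def preprocess_face(img, bbox, out_size=48):
    """Crop a detected face, stretch its contrast, and resize it.

    img      : 2-D uint8 grayscale image.
    bbox     : (x, y, w, h) face box, assumed to come from a face detector.
    out_size : side length of the square CNN input (illustrative value).
    """
    x, y, w, h = bbox
    face = img[y:y + h, x:x + w].astype(np.float32)

    # Contrast enhancement via min-max stretching to the full 0-255 range.
    lo, hi = face.min(), face.max()
    face = (face - lo) / max(float(hi - lo), 1e-6) * 255.0

    # Nearest-neighbour resize to the CNN's input resolution.
    rows = (np.arange(out_size) * face.shape[0] / out_size).astype(int)
    cols = (np.arange(out_size) * face.shape[1] / out_size).astype(int)
    return face[rows][:, cols].astype(np.uint8)
```

The uint8 output would then be serialized and sent to the edge server, which feeds it to the CNN and returns the predicted emotion label to the smartphone.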
