Abstract

A driver's emotional state directly affects driving safety. Under the "vehicle-human-road-cloud" integrated control framework, we propose an end-edge-cloud collaborative emotion perception network model (EEC-Net). The end side extracts key frames from the driver's facial video stream and compresses them in batches; the edge side extracts the region of interest (ROI) from the reconstructed images and feeds it to the emotion recognition model (tiny_Xception) for classification; the cloud control terminal receives abnormal ROI image data and performs online training to dynamically adjust the operating parameters of the edge model. We evaluate the system on public and self-built datasets. The results show that tiny_Xception improves accuracy by 2.45% over mini_Xception, that EEC-Net reliably perceives periods of negative emotion, and that overall system memory consumption is reduced by about 5%, while network transmission data volume and emotion recognition computation time are reduced by 95% and 60%, respectively.
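
The following is a minimal sketch of the three-stage data flow described above (end-side key-frame selection, edge-side ROI classification, cloud-side flagging of abnormal samples for online training). All function names, the frame-difference threshold, the fixed ROI box, the linear stand-in for the tiny_Xception classifier, and the emotion label set are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical skeleton of the EEC-Net pipeline; stage boundaries follow the
# abstract, internals are placeholder assumptions.
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]  # assumed label set

def end_select_keyframes(frames, diff_threshold=12.0):
    """End side: keep frames whose mean absolute difference from the previously
    kept frame exceeds a threshold (a simple key-frame heuristic), producing the
    batch that would be compressed and uploaded to the edge."""
    kept, last = [], None
    for f in frames:
        if last is None or np.abs(f.astype(np.float32) - last).mean() > diff_threshold:
            kept.append(f)
            last = f.astype(np.float32)
    return kept

def edge_extract_roi(frame, box=(16, 16, 48, 48)):
    """Edge side: crop the face region of interest; a fixed box stands in for a
    real face detector on the reconstructed image."""
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

def edge_classify(roi, weights):
    """Edge side: stand-in for the tiny_Xception classifier; a real deployment
    would run the trained CNN here. This applies a toy linear scorer."""
    logits = weights @ roi.astype(np.float32).ravel()
    return EMOTIONS[int(np.argmax(logits))]

def cloud_should_retrain(label):
    """Cloud side: flag abnormal (negative-emotion) ROIs for upload and
    online training, which would then update the edge model's parameters."""
    return label in {"angry", "disgust", "fear", "sad"}

# Toy end-to-end run on synthetic 80x80 grayscale frames.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(30, 80, 80), dtype=np.uint8)
weights = rng.normal(size=(len(EMOTIONS), 48 * 48)).astype(np.float32)

for frame in end_select_keyframes(frames):
    roi = edge_extract_roi(frame)
    label = edge_classify(roi, weights)
    if cloud_should_retrain(label):
        print("upload ROI to cloud for online training:", label)
```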
