Abstract
In recent years, many large-scale information systems in the Internet of Things (IoT), such as smart cities, smart medical systems, and industrial Internet systems, can be modelled as interdependent sensor networks. The successful application of Facial Expression Recognition (FER) in the IoT can make algorithms faster and more convenient, lower overall costs, support better business practices, and enhance sustainability. FER is essential for effective communication between humans and machines. Video-based facial expression detection is a crucial component of driver assistance systems, where it is used to gauge the driver's mood. Several studies show that a driver's emotions strongly influence driving behaviour and can lead to disastrous car crashes. However, factors such as changes in pose, lighting, and occlusion hinder reliable identification of driver emotions under realistic monitoring conditions. To address these issues, the Driver Facial Expression Emotion Recognition (DFEER) system is developed, based on restoring the blurred facial region. Given a sequence of blurred faces, the optical flows between consecutive frames are first computed. A trained Parallel Multi-Verse Optimizer (PMVO) then reconstructs the optical flows to repair the damage caused by occlusion. In the parallel scheme, the candidate solutions are randomly split into groups of occluded and non-occluded optical flows, and the groups exchange their findings after a set number of iterations. The reconstructed optical flows are then used directly in the classification phase to predict expressions. In this study, a method based on Very Deep Convolutional Networks (VGGNet) is proposed for recognising human emotions. The effectiveness of the classification model is assessed on both the CK+ (Cohn-Kanade) and KMU-FED (Keimyung University Facial Expression of Drivers) databases, and the proposed approach is evaluated in terms of accuracy, recall, precision, and F-measure.
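As a rough illustration of the pipeline outlined above (optical-flow extraction, repair of occluded flow, and VGG-based expression classification), the following Python sketch shows one plausible realisation. It is not the authors' implementation: Farneback optical flow (OpenCV), a mean-fill stand-in for the PMVO reconstruction step, torchvision's VGG16 with a replaced output head, and the choice of seven expression classes are all assumptions made for the example.

```python
# Minimal sketch of a DFEER-style pipeline (illustrative only, not the paper's code).
# Assumptions: OpenCV Farneback flow, a placeholder occlusion repair instead of PMVO,
# and torchvision VGG16 as the "Very Deep Convolution Networks" classifier.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import vgg16

NUM_EMOTIONS = 7  # assumed number of expression classes (e.g. basic emotions in CK+)

def optical_flows(frames):
    """Dense Farneback optical flow between consecutive grayscale frames."""
    flows = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, nxt, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        flows.append(flow)  # H x W x 2 array of (dx, dy) vectors
    return flows

def reconstruct_occluded(flow, occlusion_mask):
    """Placeholder for the PMVO-based reconstruction: occluded flow vectors are
    simply replaced by the mean of the visible ones."""
    repaired = flow.copy()
    repaired[occlusion_mask] = flow[~occlusion_mask].mean(axis=0)
    return repaired

def build_classifier():
    """VGG16 with its final layer swapped for the expression classes
    (hypothetical head; the abstract does not specify the exact configuration)."""
    model = vgg16(weights=None)
    model.classifier[6] = nn.Linear(4096, NUM_EMOTIONS)
    return model

def flow_to_tensor(flow):
    """Map a 2-channel flow field to a 3-channel 224x224 tensor for VGG input."""
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*flow.shape[:2], 3), dtype=np.uint8)
    hsv[..., 0] = ang * 90 / np.pi                  # hue encodes flow direction
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
    rgb = cv2.resize(rgb, (224, 224)).astype(np.float32) / 255.0
    return torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)

if __name__ == "__main__":
    # Toy demonstration with random frames and a synthetic occluded region.
    frames = [np.random.randint(0, 255, (128, 128), np.uint8) for _ in range(3)]
    flows = optical_flows(frames)
    mask = np.zeros(flows[0].shape[:2], dtype=bool)
    mask[40:80, 40:80] = True
    repaired = reconstruct_occluded(flows[0], mask)
    logits = build_classifier()(flow_to_tensor(repaired))
    print("predicted expression class:", logits.argmax(dim=1).item())
```

In the paper's method, the placeholder repair step would instead be driven by the PMVO, with candidate solutions split into occluded and non-occluded groups that exchange information after a fixed number of iterations.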