Abstract

The use of video conferencing tools in education has increased dramatically in recent years. Especially after the COVID-19 outbreak, many classes moved to online platforms due to social distancing precautions. While this trend removes physical constraints on education and provides a continuous educational environment, it also creates problems in the long term. Chief among them, many instructors and students have reported a lack of emotional interaction between participants. During in-person education, the speaker receives immediate emotional feedback through the expressions of the audience. However, this valuable feedback cannot be fully exploited in online lectures, since current tools can display only a limited number of faces on the screen at a time. To alleviate this problem and bring the online education experience one step closer to in-person education, this study presents EduFERA, a system that provides a real-time emotional assessment of students based on their facial expressions during video conferencing. Several state-of-the-art techniques were evaluated for face recognition and facial emotion assessment, and the best-performing model was deployed as a Flask Web API with a user-friendly ReactJS frontend, which can be integrated as an extension to existing online lecturing systems.
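The per-frame pipeline implied by the abstract (detect faces in a video frame, classify each face's emotion, aggregate a class-level assessment) could be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the detector and classifier below are stubs, and the function names and the emotion label set are assumptions.

```python
# Hypothetical sketch of a facial-emotion assessment pipeline like the one
# EduFERA describes: detect faces, score each face against a fixed set of
# emotion classes, and report the class-wide distribution. The detector and
# classifier are stubs standing in for the actual models used in the paper.
from collections import Counter
from typing import Dict, List

EMOTIONS = ["happy", "sad", "neutral", "surprised"]  # assumed label set


def detect_faces(frame) -> List[object]:
    """Stub detector: a real system would return cropped face regions."""
    return list(frame)  # here, each element of `frame` stands in for one face


def classify_emotion(face) -> str:
    """Stub classifier: a real system would run a trained model on the crop."""
    return face if face in EMOTIONS else "neutral"


def assess_frame(frame) -> Dict[str, float]:
    """Return the fraction of detected faces showing each emotion."""
    faces = detect_faces(frame)
    if not faces:
        return {}
    counts = Counter(classify_emotion(f) for f in faces)
    return {e: counts[e] / len(faces) for e in EMOTIONS}


# Example: four "faces" in one frame yield a per-emotion distribution
# that a dashboard could show the instructor in real time.
summary = assess_frame(["happy", "happy", "neutral", "sad"])
print(summary)
```

In a deployment like the one described, `assess_frame` would sit behind a Flask endpoint receiving frames from the ReactJS frontend, with the per-frame distributions streamed back to the instructor's view.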
