Abstract

Affective computing (AC) has been regarded as a relevant approach to identifying online learners’ mental states and predicting their learning performance. Previous research has mainly used a single data source, typically learners’ facial expressions, to compute learners’ affect. However, the same facial expression may represent different affective states under different head poses. This study proposes a dual-source data approach to address this problem. Facial expression and head pose are two typical data sources that can be captured from online learning videos. The current study collected a dual-source data set of facial expressions and head poses from an online class in a middle school. A deep learning neural network, AlexNet with an attention mechanism, was developed to verify the effect of the proposed dual-source fusion strategy on affective computing. The results show that the dual-source fusion approach significantly outperforms the single-source approach in AC recognition accuracy (dual-source Attention-AlexNet model, 80.96%; single-source, facial expression 76.65% and head pose 64.34%). This study contributes to the theoretical construction of the dual-source data fusion approach and to the empirical validation of the Attention-AlexNet neural network approach to affective computing in online learning contexts.
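
The abstract does not publish the Attention-AlexNet architecture, so the following is only a minimal sketch of one plausible reading in PyTorch: each data source (a facial-expression crop and a head-pose crop) passes through its own AlexNet convolutional backbone, and a learned attention weighting fuses the two feature vectors before classification. The two-branch layout, the form of the attention mechanism, the 224x224 input size, and the three-class output are all assumptions for illustration, not the authors' published design.

```python
# Hypothetical sketch of a dual-source Attention-AlexNet fusion model.
# Branch structure, attention form, and class count are assumptions;
# the paper's exact architecture is not specified in the abstract.
import torch
import torch.nn as nn
from torchvision.models import alexnet


class DualSourceAttentionAlexNet(nn.Module):
    def __init__(self, num_classes: int = 3):  # number of affective states (assumed)
        super().__init__()
        # One AlexNet convolutional backbone per data source.
        self.face_branch = alexnet(weights=None).features
        self.pose_branch = alexnet(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)  # -> one 256-dim vector per branch
        # Attention scores weight the contribution of each source before fusion.
        self.attn = nn.Sequential(nn.Linear(512, 2), nn.Softmax(dim=1))
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, face_img: torch.Tensor, pose_img: torch.Tensor) -> torch.Tensor:
        f = self.pool(self.face_branch(face_img)).flatten(1)  # (B, 256)
        p = self.pool(self.pose_branch(pose_img)).flatten(1)  # (B, 256)
        w = self.attn(torch.cat([f, p], dim=1))               # (B, 2) source weights
        fused = torch.cat([w[:, :1] * f, w[:, 1:] * p], dim=1)  # attention-weighted fusion
        return self.classifier(fused)


# Usage: two 224x224 RGB crops per sample, one per modality.
model = DualSourceAttentionAlexNet()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 3])
```

Under this reading, the attention weights let the network discount the facial-expression features when the head pose makes them ambiguous, which is the motivation the abstract gives for fusing the two sources.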
