Abstract

In this paper, we present a novel approach to predicting in-class performance from course-learning log data, a problem of practical importance for personalized education and classroom management. Specifically, a set of fine-grained features is extracted from unit-learning log data to train a prediction model based on long short-term memory (LSTM). To further improve accuracy, we introduce moth-flame optimization-Attention-LSTM (MFO-Attention-LSTM), an improvement on the conventional LSTM-Attention model in which the MFO algorithm replaces traditional backpropagation for computing the attention-layer parameters, allowing the model to escape local optima. The proposed model outperforms the SVM, CNN, RNN, LSTM, and LSTM-Attention models in terms of F1 score, and empirical results show that the MFO optimization contributes significantly to this improvement. In conclusion, the proposed MFO-Attention-LSTM model offers a promising solution for predicting in-class performance from course-learning log data and can provide valuable insights for personalized education and classroom management.
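
The abstract does not give implementation details, so the following is a minimal, self-contained sketch of the core idea only: using moth-flame optimization (Mirjalili, 2015) to search attention-layer parameters directly by minimizing a loss, rather than learning them by backpropagation. The toy data, the single-vector attention scoring, and the logistic read-out are illustrative assumptions, not the authors' implementation; the pre-trained LSTM hidden states are stood in by fixed random features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pre-computed LSTM hidden states: (samples, timesteps, hidden_dim).
# In the paper these would come from the trained LSTM; here they are random.
H = rng.normal(size=(200, 10, 8))
y = (H.mean(axis=(1, 2)) > 0).astype(float)  # synthetic binary labels

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def loss(params):
    """Binary cross-entropy of an attention-pooled linear classifier.
    params = [attention vector w_a (hidden_dim) | classifier w (hidden_dim) | bias].
    This single-vector attention form is an assumption for illustration."""
    d = H.shape[2]
    w_a, w, b = params[:d], params[d:2 * d], params[-1]
    scores = H @ w_a                              # (samples, timesteps) attention scores
    alpha = softmax(scores)                       # attention weights per time step
    context = (alpha[..., None] * H).sum(axis=1)  # attention-pooled representation
    p = 1.0 / (1.0 + np.exp(-(context @ w + b)))  # sigmoid output
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def mfo(fitness, dim, n_moths=30, iters=200, lb=-2.0, ub=2.0, b_spiral=1.0):
    """Moth-flame optimization (Mirjalili, 2015): moths spiral around flames
    (the best solutions found so far), and the flame count shrinks linearly
    so exploration gives way to exploitation."""
    moths = rng.uniform(lb, ub, size=(n_moths, dim))
    flames, flame_fit = None, None
    for it in range(iters):
        fit = np.array([fitness(m) for m in moths])
        if flames is None:                        # first iteration: flames = sorted moths
            order = np.argsort(fit)
            flames, flame_fit = moths[order].copy(), fit[order].copy()
        else:                                     # keep the best n_moths of moths + old flames
            pool = np.vstack([flames, moths])
            pool_fit = np.concatenate([flame_fit, fit])
            order = np.argsort(pool_fit)[:n_moths]
            flames, flame_fit = pool[order].copy(), pool_fit[order].copy()
        flame_no = int(round(n_moths - it * (n_moths - 1) / iters))
        r = -1.0 - it / iters                     # convergence constant: -1 -> -2
        for i in range(n_moths):
            j = min(i, flame_no - 1)              # surplus moths all spiral to the last flame
            d = np.abs(flames[j] - moths[i])
            t = (r - 1) * rng.random(dim) + 1     # t uniform in [r, 1]
            moths[i] = d * np.exp(b_spiral * t) * np.cos(2 * np.pi * t) + flames[j]
            moths[i] = np.clip(moths[i], lb, ub)
    return flames[0], flame_fit[0]

best, best_loss = mfo(loss, dim=2 * H.shape[2] + 1)
print(f"best cross-entropy found by MFO: {best_loss:.4f}")
```

Because MFO evaluates a population of candidate parameter vectors rather than following a single gradient trajectory, it can move past local minima in the attention-layer loss surface, which is the property the abstract attributes to the MFO-optimized attention layer.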
