This article discusses the evaluation of online and offline blended MOOC teaching based on deep neural networks. Deep neural networks are an important means of solving problems in many fields. Such a network can evaluate teachers' teaching attitude, classroom teaching content, narrative ability, the teaching methods used, and whether those methods are rigorous, and it can be trained on large datasets of student evaluations of a course to produce results. The article first explains the advantages of the neural network model and the reasons for the emergence of MOOCs and their blending with traditional classrooms. It then describes several deep neural network (DNN) models and algorithms, such as the BP neural network model and algorithm. This model uses backpropagation: when there is an error between the network's output and the target sample, the error is propagated backward to adjust the thresholds and weights so that the error is minimized. The algorithm consists of forward propagation and backpropagation, and gradient descent is applied to obtain the weight changes of the output layer and the hidden layer. The article also explains the Gaussian model within DNNs: given the training data vectors and a GMM configuration, the Gaussian mixture model is trained by expectation maximization with an iterative algorithm, and unsupervised clustering accuracy (ACC) is used to evaluate its performance. The blended teaching mode in the MOOC environment is described with figures; teaching practice conditions, time, location, curriculum resources, and teaching methods and means must all be considered. This mode can cultivate students' spatial imagination, engineering awareness, creative design ability, practical drawing ability, and logical thinking, and it allows teachers to receive fair and just evaluation from students. Finally, the article discusses GPU-based parallelization and optimization of the DNN model, splitting the model and combining the different parts to compute the weight parameters. Model-parallel and data-parallel training are combined to increase processing speed for the same amount of data, enlarge the batch size, improve accuracy, and reduce training oscillation. The results show that the DNN model greatly improves training performance on the MOOC online and offline blended course dataset: computation time is shortened, convergence is faster, accuracy is improved, and the speedup ratio rises by more than 37.37% year on year, while accuracy rises by more than 12.34% year on year.
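As a rough illustration of the BP procedure summarized above (forward propagation, backpropagation of the output error, and gradient-descent weight changes for the output and hidden layers), the following sketch assumes a single hidden layer with sigmoid activations and a mean-squared error; the layer sizes, learning rate, and function names are illustrative and not taken from the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_step(x, t, W1, b1, W2, b2, lr=0.1):
    # Forward propagation: input -> hidden -> output.
    h = sigmoid(x @ W1 + b1)          # hidden-layer activations
    y = sigmoid(h @ W2 + b2)          # network output
    # Backpropagation of the error between output y and target t.
    delta_out = (y - t) * y * (1 - y)             # output-layer error term
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # hidden-layer error term
    # Gradient-descent weight changes for the output and hidden layers.
    W2 -= lr * np.outer(h, delta_out)
    b2 -= lr * delta_out
    W1 -= lr * np.outer(x, delta_hid)
    b1 -= lr * delta_hid
    return W1, b1, W2, b2, 0.5 * np.sum((y - t) ** 2)

# Illustrative usage on a single (hypothetical) evaluation sample.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(6, 4)), np.zeros(4)   # 6 input features, 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # 1 output score
x, t = rng.random(6), np.array([0.8])
for _ in range(100):
    W1, b1, W2, b2, err = bp_step(x, t, W1, b1, W2, b2)
```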
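The GMM step can be sketched in a similar spirit: a Gaussian mixture is fitted by expectation maximization, and the unsupervised clustering accuracy (ACC) is computed by the usual best one-to-one matching between cluster ids and reference labels. The data below are random placeholders, and the use of scikit-learn and SciPy is an assumption, not the article's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.optimize import linear_sum_assignment

def clustering_acc(y_true, y_pred):
    """ACC: best one-to-one mapping of cluster ids to labels (Hungarian method)."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    count = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        count[p, t] += 1                       # co-occurrence of cluster p and label t
    rows, cols = linear_sum_assignment(-count)  # maximize matched counts
    return count[rows, cols].sum() / len(y_true)

X = np.random.rand(200, 8)                      # placeholder feature vectors
y = np.random.randint(0, 3, size=200)           # placeholder reference labels
gmm = GaussianMixture(n_components=3, covariance_type="full", max_iter=100)
labels = gmm.fit_predict(X)                     # EM fitting + hard cluster assignment
print("ACC:", clustering_acc(y, labels))
```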
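For the GPU-based parallelization, one common way to enlarge the batch and raise throughput is data parallelism, sketched below with PyTorch's DataParallel; the article additionally splits the model itself across devices, which this minimal example does not reproduce, and all layer and batch sizes are illustrative rather than the article's settings.

```python
import torch
import torch.nn as nn

# Replicate the same DNN on each visible GPU; each replica processes a slice of
# one larger batch, which raises throughput and smooths (reduces) training oscillation.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 5),                  # e.g. five evaluation dimensions (illustrative)
)
if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)     # split each batch across the GPUs
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(256, 32, device=device)   # one large batch of evaluation features
t = torch.randn(256, 5, device=device)    # placeholder target scores
loss = loss_fn(model(x), t)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```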