To address the limitations of the flipped classroom in personalized teaching and interaction quality, this paper designs a Virtual Reality (VR)-based flipped classroom model for colleges and universities that incorporates the Contrastive Language-Image Pre-Training (CLIP) algorithm. Through cross-modal data fusion, the model tightly couples students' operational behavior with the teaching content and improves teaching effectiveness through an intelligent feedback mechanism. Test results show that the similarity between the video and image modalities reaches 0.89, indicating that information from different modalities can be effectively integrated to ensure the semantic consistency and intuitive comprehensibility of the teaching content. The minimum Kullback-Leibler (KL) divergence is 0.12, which ensures a stable data distribution and avoids information loss. Automatically generated feedback reaches an accuracy of 93.72%, significantly improving the efficiency of personalized learning guidance. In the virtual-scene adaptability test, the scene adjustment frequency is 2.5 times per minute and the consistency score remains stable above 8.6, ensuring that teaching goals stay coherent under complex interaction. Through VR technology and intelligent feedback, this paper aims to enhance the personalized learning experience, improve teaching efficiency and autonomous learning outcomes, and promote innovation in interactive teaching models.
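The two reported metrics, cross-modal similarity and KL divergence, can be illustrated with a minimal sketch. This is not the paper's implementation; the embedding dimension (512, as in CLIP ViT-B/32), tensor shapes, and function names are assumptions used only to show how CLIP-style cosine similarity between modality embeddings and KL divergence between feature distributions are typically computed.

```python
# Illustrative sketch only: CLIP-style cross-modal similarity and KL divergence.
import torch
import torch.nn.functional as F

def cross_modal_similarity(video_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between L2-normalized embeddings (CLIP-style)."""
    video_emb = F.normalize(video_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    return (video_emb * image_emb).sum(dim=-1)

def kl_divergence(p_logits: torch.Tensor, q_logits: torch.Tensor) -> torch.Tensor:
    """KL(P || Q) between two categorical distributions given as logits."""
    log_p = F.log_softmax(p_logits, dim=-1)
    log_q = F.log_softmax(q_logits, dim=-1)
    # F.kl_div expects the input to be log Q and the target to be P (here in log form).
    return F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")

# Placeholder embeddings standing in for encoded video and image features.
video_emb = torch.randn(4, 512)
image_emb = torch.randn(4, 512)
print(cross_modal_similarity(video_emb, image_emb))
print(kl_divergence(torch.randn(4, 10), torch.randn(4, 10)))
```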