Abstract
At present, there is a serious disconnect between online and offline teaching on large-scale hybrid English MOOC recommendation platforms. This is mainly due to the cold-start and matrix-sparsity problems of the recommendation algorithm, and to the fact that it considers only users' ratings while neglecting their personalized evaluations, which makes it difficult to fully capture users' interest characteristics. To address these problems, this paper proposes an online and offline hybrid English teaching recommendation platform based on reinforcement learning and user evaluation factors. First, the idea of value function estimation from reinforcement learning is introduced: the difference between users' state value functions replaces the previous similarity calculation, which alleviates the matrix-sparsity problem, and a learning rate controls the convergence speed of the weight vector in the user state value function, which alleviates the cold-start problem. Second, by incorporating the learning of the user evaluation vector into the estimation of the state value function, the user's state value function can be approximated and the discrimination of the target user can be reflected. Experimental results show that the proposed recommendation algorithm effectively alleviates the cold-start and matrix-sparsity problems of current collaborative filtering recommendation algorithms, mines users' interest characteristics more deeply, and further improves the accuracy of rating prediction.
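To make the two modifications concrete, the following is a minimal sketch, not the authors' implementation: it assumes a user's state value function is approximated linearly from item features concatenated with the user's personalized evaluation vector, a learning rate bounds how fast the weight vector converges, and user similarity is taken from the gap between two users' estimated state values instead of from the sparse rating matrix. All names, shapes, and the linear form are illustrative assumptions.

```python
import numpy as np

def update_weights(w, item_features, evaluation, rating, learning_rate=0.05):
    """One gradient step toward an observed rating for one user.
    The learning rate controls the convergence speed of the weight vector."""
    x = np.concatenate([item_features, evaluation])   # state = item features + evaluation vector
    error = rating - w @ x                            # prediction error against the rating
    return w + learning_rate * error * x              # small step keeps early estimates stable

def value_similarity(w_u, w_v, states):
    """Similarity of two users via the difference of their state value functions."""
    gap = np.abs(states @ w_u - states @ w_v)         # |V_u(s) - V_v(s)| over shared states
    return 1.0 / (1.0 + gap.mean())                   # smaller gap -> more similar users

# Tiny usage example: 5 shared states built from 3 item features + 2 evaluation terms.
rng = np.random.default_rng(0)
states = rng.normal(size=(5, 5))
w_u, w_v = rng.normal(size=5), rng.normal(size=5)
w_u = update_weights(w_u, states[0, :3], states[0, 3:], rating=4.0)
print(value_similarity(w_u, w_v, states))
```

Because the similarity is computed from learned value functions rather than from co-rated items, it remains defined even when two users share few or no ratings, which is the sparsity argument made above.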
Highlights
MOOC is the abbreviation of Massive Open Online Course, a format that has attracted widespread attention from academia for its advantages of large scale and openness
Based on research into recommendation algorithms at home and abroad, and aiming at the problems in current recommendation algorithms, this study improves the traditional collaborative filtering recommendation algorithm in two respects. The main contributions of this paper are as follows: (1) Introducing the idea of reinforcement learning, this paper proposes to measure the similarity between users by comparing their state value functions instead of using the previous similarity calculation, thereby alleviating the matrix-sparsity problem, and to address the cold-start problem by controlling the convergence speed of the weights in the state value functions
Since the agent's state at the next moment depends only on its state at the current moment and the action taken, the process has the Markov property. Value function estimation is an extremely important method in reinforcement learning; that is, the optimal policy is obtained through the agent's state value function or action value function
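For reference, the quantity that value function estimation approximates can be written in the standard textbook form (not quoted from the paper): under the Markov property, the state value function of a policy π satisfies the Bellman expectation equation, with γ the discount factor, P the transition probabilities, and R the reward.

```latex
V^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\left[\, r_{t+1} + \gamma\, V^{\pi}(s_{t+1}) \;\middle|\; s_t = s \,\right]
         \;=\; \sum_{a} \pi(a \mid s) \sum_{s'} P(s' \mid s, a)\,\bigl[\, R(s, a, s') + \gamma\, V^{\pi}(s') \,\bigr]
```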
Summary
MOOC is the abbreviation of Massive Open Online Course, a format that has attracted widespread attention from academia for its advantages of large scale and openness. Openness means that the course has no access conditions or entry threshold, and anyone can use the learning resources free of charge from anywhere [1, 2, 3, 4]. Classroom teaching is composed of group reports, in-class question answering and practice, and classroom assessment. At present, this teaching mode has been applied in ideological and political courses and English courses, and its teaching effect has been well received. The MOOC mode unifies students' online and offline learning and brings both into the credit system, which further motivates students to learn