Abstract

Scratch, a widely used educational programming platform, has attracted a huge number of users worldwide. Given its massive pool of programming resources, making satisfactory project recommendations has drawn increasing attention, especially explainable recommendations. Existing Scratch recommender systems do not explain why a project is recommended, which prevents users from making informed decisions and from trusting the system. To address this problem, we design Scratch-RL, an explainable reinforcement learning framework over knowledge graphs for Scratch recommendation. First, we devise a preference-driven Actor-Critic network to model users' local preferences and explore potentially interesting projects along reasoning paths. Within the Actor-Critic network, we design a preference state function, a preference-based reward function, and a preference-conditional action-pruning strategy for the agent. Then, we leverage a directive discriminator network to evaluate the correctness of the agent's recommendations and return an extra guidance reward accordingly: a high guidance reward is given when the agent generates correct recommendations, which helps the agent quickly and accurately capture user preferences. Finally, we jointly train the Actor-Critic network and the discriminator; once training is complete, the reasoning paths serve as explanations for the recommendations. Extensive experiments on both a Scratch data set and a public data set show that Scratch-RL achieves favorable recommendation results compared with state-of-the-art models.
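To make the framework's core ideas concrete, the following is a minimal toy sketch (not the authors' implementation; the graph, preference scores, and function names are all illustrative assumptions) of an agent walking a small knowledge graph, with preference-conditional action pruning, a preference-based path reward, and a discriminator that adds a guidance reward when the path ends at a correct project:

```python
import math
import random

# Toy knowledge graph: entity -> list of (relation, next_entity) edges.
# Entities and relations are invented for illustration only.
KG = {
    "user_1": [("created", "proj_a"), ("liked", "proj_b")],
    "proj_a": [("uses_block", "motion"), ("remixed_from", "proj_c")],
    "proj_b": [("uses_block", "sound"), ("remixed_from", "proj_c")],
    "proj_c": [("remixed_to", "proj_d")],
    "motion": [("used_by", "proj_d")],
    "sound": [("used_by", "proj_d")],
    "proj_d": [],
}

# Assumed per-relation preference scores (a stand-in for the learned
# preference state function).
PREFERENCE = {"liked": 1.0, "remixed_from": 0.8, "remixed_to": 0.6,
              "uses_block": 0.5, "created": 0.4, "used_by": 0.3}

GROUND_TRUTH = {"proj_d"}  # project the user actually interacted with


def prune_actions(entity, keep=2):
    """Preference-conditional pruning: keep only the top-k preferred edges."""
    edges = KG.get(entity, [])
    return sorted(edges, key=lambda e: PREFERENCE.get(e[0], 0.0),
                  reverse=True)[:keep]


def discriminator(path):
    """Guidance reward: 1.0 if the reasoning path ends at a correct project."""
    return 1.0 if path[-1] in GROUND_TRUTH else 0.0


def rollout(start, hops=3, seed=0):
    """Walk the KG; return the reasoning path and its total reward."""
    rng = random.Random(seed)
    path, reward = [start], 0.0
    entity = start
    for _ in range(hops):
        actions = prune_actions(entity)
        if not actions:
            break
        # Softmax policy over preference scores (stand-in for the actor).
        weights = [math.exp(PREFERENCE.get(rel, 0.0)) for rel, _ in actions]
        rel, entity = rng.choices(actions, weights=weights)[0]
        path.append(entity)
        reward += PREFERENCE.get(rel, 0.0)  # preference-based reward
    reward += discriminator(path)           # extra guidance reward
    return path, reward


path, reward = rollout("user_1")
print(path, round(reward, 2))
```

The returned path doubles as the explanation: each hop names the relation-weighted evidence that led the agent from the user to the recommended project. The real framework learns the policy, state function, and discriminator jointly rather than using fixed scores as here.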
