Abstract

Trust-aware recommendation is an important application of recommender systems built on social networks. By recommending items based on the trust between users, it can alleviate data sparsity and improve the interpretability of results. Many recommendation algorithms have been proposed, but most of them treat trust as fixed, ignoring how trust changes over the course of user interactions. In addition, deep learning models excel at solving complex tasks and processing high-dimensional data, and they can be used to model recommendation algorithms, but they fall short in capturing changes in user preferences in a timely manner. To address these shortcomings of existing research, we propose DDPG-TR, a deep-reinforcement-learning algorithm that captures changes in user preferences and updates the trust between users. The algorithm uses the deep deterministic policy gradient (DDPG) algorithm to model the user-item interaction process. First, we design an improved state representation structure to express the user's state, which makes it easier to capture changes in user preferences. Then, when the user accepts a recommendation, the algorithm combines trust and similarity information to predict the item's score and computes the difference between the predicted and actual scores. Finally, the agent receives the score as feedback and uses this difference to update the trust values. Experiments on three datasets verify that the DDPG-TR algorithm provides more accurate recommendation results than other recommendation algorithms.
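The predict-then-update loop described above can be sketched as follows. This is a minimal illustrative sketch only: the abstract does not give the paper's formulas, so the trust/similarity weighting scheme, the learning rate, and the error-based update rule here are assumptions, not the authors' method.

```python
def predict_score(trust, sim, ratings):
    """Predict an item score from neighbors' ratings, weighting each
    neighbor by a mix of trust and similarity (illustrative 50/50 mix)."""
    weights = [0.5 * t + 0.5 * s for t, s in zip(trust, sim)]
    total = sum(weights)
    if total == 0:
        return sum(ratings) / len(ratings)  # fall back to a plain average
    return sum(w * r for w, r in zip(weights, ratings)) / total


def update_trust(trust, predicted, actual, lr=0.1, max_rating=5.0):
    """Update trust from the prediction error: a small normalized error
    raises trust, a large one lowers it (hypothetical update rule)."""
    error = abs(predicted - actual) / max_rating  # normalized to [0, 1]
    return [max(0.0, min(1.0, t + lr * (1.0 - 2.0 * error))) for t in trust]
```

For example, with two neighbors whose trust values are 0.8 and 0.2, similarities 0.6 and 0.4, and ratings 4.0 and 2.0, `predict_score` yields 3.4; if the user's actual rating is 4.0, the normalized error is 0.12 and trust in the first neighbor rises from 0.8 to 0.876 under this sketch.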
