Abstract

Collaborative filtering recommends potentially interesting content to users based on the historical data it collects from them, which exposes users to privacy breaches by untrusted servers. Differential privacy provides a rigorous definition of privacy, and protecting user privacy in collaborative filtering through differentially private methods has drawn wide attention from researchers. However, perturbing user data with existing differential privacy techniques leads to poor recommendation accuracy, because these techniques generally perturb each user's data independently and thereby destroy the data similarity that collaborative filtering relies on. This paper proposes RDPCF, a differentially private user data perturbation method that perturbs user data within a given content similarity range and constrains the perturbation probability according to the differential privacy definition, thus preserving the accuracy of collaborative filtering on the perturbed data while protecting user privacy. Experimental results show that RDPCF considerably outperforms existing methods in both privacy protection level and recommendation accuracy.
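The abstract describes the mechanism only at a high level. As a rough illustration of the idea of range-restricted perturbation, the hypothetical Python sketch below replaces a user's item by sampling from candidates within a content-similarity range, using exponential-mechanism-style probabilities governed by a privacy budget. The function name, similarity threshold, and scoring rule are assumptions for illustration only and are not the paper's RDPCF algorithm or its privacy analysis.

```python
import numpy as np

def perturb_item(true_item, similarity, epsilon, threshold=0.5):
    """Illustrative range-constrained perturbation (not the paper's RDPCF).

    true_item:  index of the user's real item
    similarity: 1-D array, similarity[j] = content similarity of item j to true_item
    epsilon:    privacy budget
    threshold:  only items with similarity >= threshold are candidate outputs
    """
    # Restrict the output range to items similar enough to the true item,
    # so the perturbed data still supports similarity-based recommendation.
    candidates = np.where(similarity >= threshold)[0]
    if candidates.size == 0:
        candidates = np.array([true_item])

    # Exponential-mechanism-style weights: more similar candidates are more
    # likely, with the probability ratio between any two outputs bounded by
    # exp(epsilon) when the similarity score has sensitivity at most 1.
    scores = similarity[candidates]
    weights = np.exp(epsilon * scores / 2.0)
    probs = weights / weights.sum()

    return np.random.choice(candidates, p=probs)

# Toy usage: perturb item 3 given similarities of the five items to item 3.
sim = np.array([0.9, 0.2, 0.7, 1.0, 0.1])
noisy_item = perturb_item(true_item=3, similarity=sim, epsilon=1.0)
```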
