Abstract

Focusing on privacy issues in recommender systems, we propose a framework with two perturbation methods for differentially private collaborative filtering that guards users against inference attacks. To conceal individual ratings while still providing valuable predictions, we consider several representative algorithms for computing the predicted scores and give specific schemes for adding Laplace noise. The DPI (Differentially Private Input) method perturbs the original ratings and can be followed by any recommendation algorithm. By contrast, the DPM (Differentially Private Manner) method operates on the original ratings, perturbing intermediate measurements while the algorithms run and releasing only the predicted scores. The experimental results show that both methods provide valuable predictions while guaranteeing differential privacy, suggesting the framework is a feasible and effective solution for private recommendation.
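The DPI idea of perturbing the inputs can be sketched with the Laplace mechanism applied directly to the rating matrix before any recommendation algorithm runs. A minimal sketch, assuming a 1-to-5 rating scale; the function name, sensitivity choice, and clamping step are our illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def dpi_perturb(ratings, epsilon, r_min=1.0, r_max=5.0):
    """DPI-style input perturbation: add Laplace noise to each rating.

    The sensitivity is taken as the rating range (r_max - r_min),
    since changing a single rating moves the input by at most that much.
    """
    sensitivity = r_max - r_min
    scale = sensitivity / epsilon
    noisy = ratings + np.random.laplace(0.0, scale, size=ratings.shape)
    # Clamp back to the valid rating range so any downstream
    # recommendation algorithm receives well-formed inputs.
    return np.clip(noisy, r_min, r_max)
```

Because the noise is added once to the inputs, the perturbed matrix can be passed to any recommender (kNN, SVD, etc.) without further privacy accounting inside the algorithm.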

Highlights

  • In the Internet age, users are constantly troubled by information overload, since they struggle to extract the genuinely useful parts from large amounts of information

  • The experimental results of this paper showed that the DPI method performs better than the DPM method, which is consistent with the conclusion of [8]

  • We addressed the problem of differentially private collaborative filtering based on existing algorithms

Summary

Introduction

In the Internet age, users are constantly troubled by information overload, since they struggle to extract the genuinely useful parts from large amounts of information. Laplace noise is incorporated into various global effects and into the covariance matrix of user rating vectors based on item-item similarities. Given these noisy measurements, several algorithms (the k-Nearest Neighbor method [10] and the standard SVD-based prediction mechanism) are employed to make private recommendations directly. Following [5, 6], we compute the predicted scores from all users' ratings rather than from a recommended list, which makes full use of the data when estimating the noise error. We propose a differential privacy framework for collaborative filtering that covers three existing algorithms for computing the predicted scores and adopts two methods of adding Laplace noise to conceal individual ratings while providing valuable prediction results.
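The DPM idea of perturbing intermediate measurements rather than inputs can be illustrated on the item-item covariance matrix mentioned above. A hedged sketch: the sensitivity bound, centering step, and symmetrization below are simplifying assumptions for illustration, not the paper's exact calibration:

```python
import numpy as np

def dpm_noisy_covariance(ratings, epsilon, r_max=5.0):
    """DPM-style measurement perturbation (sketch): add Laplace noise
    to the item-item covariance matrix instead of to the raw ratings.

    The sensitivity bound r_max**2 is a crude assumption: one rating
    contributes at most r_max * r_max to any single covariance entry.
    """
    centered = ratings - ratings.mean(axis=0)
    cov = centered.T @ centered / ratings.shape[0]
    scale = (r_max ** 2) / epsilon
    noise = np.random.laplace(0.0, scale, size=cov.shape)
    # Symmetrize the noise so the perturbed matrix stays symmetric,
    # as a similarity/covariance matrix should be.
    noise = (noise + noise.T) / 2.0
    return cov + noise
```

A kNN or SVD-based predictor can then be run on the noisy covariance; only these perturbed measurements and the resulting predicted scores are released, never the raw ratings.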

Background
The Proposed Method
Analysis of Privacy
Experiments
Conclusion