Abstract

Matrix factorization (MF) is an effective technique in recommender systems. Because MF must collect and analyze large amounts of user data during recommendation, it risks leaking personal information. Most existing privacy-preserving MF research protects explicit feedback but ignores the protection of implicit feedback. To address this limitation, we propose an adaptive differentially private matrix factorization (ADPMF) for implicit feedback. The proposed model is trained under the Bayesian personalized ranking (BPR) framework and uses gradient perturbation to achieve (ϵ,δ)-differential privacy. Within this model, we design two effective mechanisms, adaptive clipping and an adaptive noise scale, that improve recommendation performance while maintaining privacy, and we use Gaussian differential privacy (GDP) to carry out the privacy analysis under dynamically changing clipping thresholds and noise scales. Theoretical analysis and experimental results demonstrate that ADPMF not only achieves highly accurate recommendations but also provides differential privacy protection for implicit feedback. The results show that ADPMF improves recommendation performance by 10% to 20% over current privacy-preserving recommendation methods and has promising application prospects in various fields.
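
To make the training scheme concrete, below is a minimal sketch of BPR-style matrix factorization with DP-SGD-style gradient perturbation: each per-example gradient is clipped to an L2 threshold and perturbed with Gaussian noise calibrated to that threshold. The abstract does not specify ADPMF's exact update rules, so the quantile-based adaptive-clipping heuristic, the noise multiplier `sigma`, and all hyperparameters here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, dim = 100, 200, 16
U = 0.1 * rng.standard_normal((n_users, dim))   # user latent factors
V = 0.1 * rng.standard_normal((n_items, dim))   # item latent factors

lr = 0.05        # learning rate
sigma = 1.0      # noise multiplier (assumed; governs the privacy/utility trade-off)
clip = 1.0       # initial L2 clipping threshold C
quantile = 0.5   # target quantile for the assumed adaptive-clipping rule
norms = []       # history of per-example gradient norms

def bpr_grads(u, i, j):
    """Per-example BPR gradients for (user u, positive item i, negative item j)."""
    x = U[u] @ (V[i] - V[j])        # preference margin x_uij
    s = 1.0 / (1.0 + np.exp(x))     # derivative factor of -log(sigmoid(x))
    gu = -s * (V[i] - V[j])
    gi = -s * U[u]
    gj = s * U[u]
    return gu, gi, gj

for step in range(1000):
    # Toy sampling: in practice i is drawn from observed implicit feedback
    # and j from unobserved items.
    u = rng.integers(n_users)
    i, j = rng.integers(n_items, size=2)
    gu, gi, gj = bpr_grads(u, i, j)

    # Clip the joint per-example gradient to L2 norm <= clip.
    norm = np.sqrt((gu**2).sum() + (gi**2).sum() + (gj**2).sum())
    norms.append(norm)
    scale = min(1.0, clip / (norm + 1e-12))
    gu, gi, gj = gu * scale, gi * scale, gj * scale

    # Gradient perturbation: Gaussian noise scaled to the clipping threshold.
    gu += sigma * clip * rng.standard_normal(dim)
    gi += sigma * clip * rng.standard_normal(dim)
    gj += sigma * clip * rng.standard_normal(dim)

    U[u] -= lr * gu
    V[i] -= lr * gi
    V[j] -= lr * gj

    # Assumed adaptive-clipping rule: periodically move C toward a
    # quantile of recently observed gradient norms.
    if (step + 1) % 100 == 0:
        clip = float(np.quantile(norms[-100:], quantile))
```

Because both the clipping threshold and the effective noise scale change during training, a moments-style accountant with fixed parameters does not apply directly; this is why the paper tracks the overall privacy loss with GDP, which composes cleanly across steps with varying noise levels.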
