Abstract

Protecting users’ privacy has drawn tremendous attention in the recommender systems community: neither the original data nor the learned model parameters should be exposed. Federated learning is an emerging and promising paradigm, where a server collects gradients from multiple distributed parties and then updates the model parameters with the aggregated gradients. However, existing works neglect some security issues; for example, the server may infer users’ rating behaviors on items from the received gradients. In this paper, we focus on heterogeneous collaborative filtering (HCF), which exploits users’ different types of feedback, such as 5-star numerical ratings and like/dislike binary ratings, in a privacy-aware manner. Specifically, we design a novel and generic federated matrix factorization algorithm for HCF, i.e., federated collective matrix factorization (FCMF). The main goal of our FCMF is to leverage the heterogeneous feedback data to accurately estimate users’ preferences while protecting users’ private information. Therefore, we keep the original rating data and the users’ latent feature vectors local, and use the less sensitive item latent vectors as a bridge for joint training. Furthermore, we use homomorphic encryption and differential privacy to ensure the security of both participants in collective training. To study the effectiveness of our FCMF, we conduct extensive empirical studies on four real-world datasets and find that our FCMF performs equivalently to the centralized method that aggregates the heterogeneous data in a single place. Moreover, the introduction of homomorphic encryption and differential privacy does not affect the recommendation accuracy much.
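To make the idea concrete, the following is a minimal sketch of the training scheme described above: each party keeps its feedback matrix and user latent factors private, while gradients for the shared item latent factors are perturbed and aggregated across parties. All names, shapes, and hyper-parameters are assumptions for illustration only; differential privacy is modeled as simple Gaussian gradient noise, and the homomorphic-encryption step the paper applies to the aggregation is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users_a, n_users_b, n_items, k = 50, 40, 30, 8

# Each party holds its own feedback matrix locally (0 = unobserved).
R_a = rng.integers(0, 6, size=(n_users_a, n_items)).astype(float)  # 5-star ratings
R_b = rng.integers(0, 2, size=(n_users_b, n_items)).astype(float)  # like/dislike

U_a = 0.1 * rng.standard_normal((n_users_a, k))  # private user factors, party A
U_b = 0.1 * rng.standard_normal((n_users_b, k))  # private user factors, party B
V = 0.1 * rng.standard_normal((n_items, k))      # shared item factors (the "bridge")

lr, reg, sigma = 0.01, 0.05, 0.001  # sigma: assumed DP noise scale

def local_grads(R, U, V):
    """One party's squared-error gradients over observed entries only."""
    mask = R > 0
    err = mask * (U @ V.T - R)
    grad_U = err @ V + reg * U    # applied locally, never shared
    grad_V = err.T @ U + reg * V  # shared only after noising (and, in the
                                  # paper, homomorphic encryption)
    return grad_U, grad_V

for _ in range(200):
    gU_a, gV_a = local_grads(R_a, U_a, V)
    gU_b, gV_b = local_grads(R_b, U_b, V)
    # User factors are updated locally and never leave each party.
    U_a -= lr * gU_a
    U_b -= lr * gU_b
    # Item-factor gradients are perturbed with Gaussian noise (differential
    # privacy) and then aggregated to update the shared item factors.
    noisy = (gV_a + sigma * rng.standard_normal(gV_a.shape)
             + gV_b + sigma * rng.standard_normal(gV_b.shape))
    V -= lr * noisy
```

After training, each party can score its own users locally as `U @ V.T` without ever revealing its ratings or user factors to the other party.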
