Abstract

Recommendation models are deployed in a variety of commercial applications to provide personalized services for users. However, most of them rely on users' original rating records, which are often collected by a centralized server for model training and may therefore cause privacy issues. Recently, some centralized federated recommendation models have been proposed to protect users' privacy; these, however, still require a server to coordinate the whole model-training process. In response, we propose a novel privacy-aware decentralized federated recommendation (DFedRec) model, which is lossless in recommendation performance compared with the traditional centralized model and is thus more accurate than other models in this line. Specifically, we design a privacy-aware structured client-level graph for sharing model parameters during training, which is a one-stone-two-birds strategy: it protects users' privacy via some randomly sampled fake entries and reduces the communication cost by sharing model parameters only with the related neighboring users. With the help of this privacy-aware structured client-level graph, we propose two novel collaborative training mechanisms for the serverless setting: a batch algorithm, DFedRec(b), and a stochastic one, DFedRec(s), where the former requires an anonymity mechanism while the latter does not. Both are equivalent to probabilistic matrix factorization (PMF) trained on a centralized server and are thus lossless. We then provide a formal analysis of the privacy guarantees of our methods and conduct extensive empirical studies on three public datasets with explicit feedback, which show that our DFedRec is effective, i.e., it is privacy-aware, communication-efficient, and lossless.
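To make the parameter-sharing idea concrete, below is a minimal sketch of one client-side PMF training step with neighbor sharing over a client-level graph. Everything here (the Client class, the averaging merge rule, the FAKE_RATIO constant) is a hypothetical illustration under our own assumptions, not the authors' DFedRec(b)/DFedRec(s) protocol: the point is only that the private user factor stays local, while item-factor updates are padded with randomly sampled fake entries before being sent to neighbors, so a neighbor cannot tell which items the client actually rated.

```python
import numpy as np

# Hypothetical sketch of one DFedRec-style client step for PMF with
# explicit feedback. Names and constants are illustrative assumptions.

RANK, LR, REG = 16, 0.01, 0.1
FAKE_RATIO = 1.0  # assumption: one fake entry per real rated item


class Client:
    def __init__(self, rated_items, ratings, num_items, rng):
        self.rng = rng
        self.rated = dict(zip(rated_items, ratings))  # item id -> rating
        self.u = rng.normal(scale=0.1, size=RANK)     # private user factor
        self.V = rng.normal(scale=0.1, size=(num_items, RANK))  # item factors

    def local_step(self):
        """One SGD pass over local ratings; returns updates to share."""
        updates = {}
        for i, r in self.rated.items():
            err = r - self.u @ self.V[i]
            # The user factor is updated locally and never leaves the client.
            self.u += LR * (err * self.V[i] - REG * self.u)
            # The item factor is updated and queued for neighbors.
            self.V[i] += LR * (err * self.u - REG * self.V[i])
            updates[i] = self.V[i].copy()
        # Pad the message with randomly sampled fake entries: the receiver
        # cannot distinguish them from genuinely rated items.
        n_fake = int(FAKE_RATIO * len(self.rated))
        unrated = [j for j in range(len(self.V)) if j not in self.rated]
        for j in self.rng.choice(unrated, size=min(n_fake, len(unrated)),
                                 replace=False):
            updates[int(j)] = self.V[j].copy()
        return updates

    def receive(self, updates):
        """Merge item-factor updates from a neighboring client."""
        for i, v in updates.items():
            self.V[i] = 0.5 * (self.V[i] + v)  # naive averaging (assumption)


# Hypothetical usage: two neighboring clients exchange masked updates.
rng = np.random.default_rng(0)
a = Client([0, 2], [4.0, 5.0], num_items=10, rng=rng)
b = Client([1, 2], [3.0, 4.0], num_items=10, rng=rng)
b.receive(a.local_step())
a.receive(b.local_step())
```

Note that the naive averaging merge above is purely expository; the paper's lossless variants are designed so that the decentralized updates reproduce centralized PMF exactly, which this sketch does not attempt to do.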
