Abstract

News recommendation systems are a critical solution to the problem of information overload, as they can suggest news that may be of interest to a particular user. Traditional recommendation systems require the collection of private information, which can lead to serious privacy concerns. Federated learning is a privacy-preserving framework that allows multiple users to train a global model without sharing their private data: users keep their data locally and compute local gradients. Recommendation systems, however, work in the opposite direction, since users must share their preferences with the server, and user preferences are highly sensitive. This mismatch between recommendation systems and federated learning may lead to user privacy leakage. Accordingly, in this paper, we propose RD-FedRec, which follows a paradigm commonly used in real-world recommendation systems. First, we propose a randomized decomposition method to protect the privacy of user preferences; it is broadly compatible with existing models and can also preserve the privacy of recommendation results. Second, to improve recommendation efficiency, we introduce a recall phase that coarsely filters candidate news, thereby reducing the time overhead of the ranking phase. We implement RD-FedRec and evaluate its performance on two real-world datasets. Experimental results show that the accuracy and efficiency of RD-FedRec are comparable to those of state-of-the-art recommendation systems that provide no privacy guarantees, and that our randomized decomposition method is compatible with most recommendation systems.
