Abstract

Federated recommender systems (FRSs), which jointly train recommendation models across numerous devices while keeping user data local, offer improved privacy-preserving advantages and have been widely explored in modern recommender systems (RSs). However, conventional FRSs require transmitting the entire model between the server and clients, which incurs a substantial carbon footprint in cost-conscious cross-device learning tasks. While several efforts have been dedicated to improving the efficiency of FRSs, it is suboptimal to treat the whole model as the objective of compact design. Moreover, current research fails to handle the out-of-vocabulary (OOV) issue in real-world FRSs, where items that were not observed during training occasionally appear in the testing phase; this is another practical challenge that has not been well studied. To this end, we propose PrivFR, a privacy-enhanced federated recommendation framework with shared hash embeddings for cross-device settings, which provides an efficient representation mechanism specialized for the embedding parameters without compromising model capability. Specifically, it represents items in a resource-efficient way by carefully combining a shared hash embedding pool with multiple hash functions. As such, each local client maintains only a small shared pool of hash embeddings, rather than fitting a full embedding vector for every item, which achieves the dual advantages of conserving resources and handling the OOV issue. Furthermore, we prove from a theoretical perspective that this mechanism protects the data privacy of local clients. Extensive experiments show that our method not only effectively reduces storage and communication overheads, but also outperforms state-of-the-art FRSs.
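To make the shared-pool idea concrete, the following is a minimal sketch (not the paper's actual implementation; pool size, embedding dimension, number of hash functions, and the aggregation by summation are all illustrative assumptions) of how multiple hash functions can index a small shared embedding table, so that any item ID, including one never seen during training, maps to a valid representation:

```python
import numpy as np


class HashedEmbedding:
    """Illustrative shared hash-embedding pool: each item is represented by
    aggregating K rows of a small shared table, selected via K salted hash
    functions, so unseen (OOV) item IDs still map to valid embeddings."""

    def __init__(self, pool_size=1000, dim=16, num_hashes=2, seed=0):
        rng = np.random.default_rng(seed)
        # Small shared pool: pool_size rows instead of one row per item.
        self.pool = rng.normal(0.0, 0.1, size=(pool_size, dim))
        self.pool_size = pool_size
        self.num_hashes = num_hashes

    def _index(self, item_id, k):
        # Salted hash into the pool; a real system might use a stronger
        # hash family (e.g. MurmurHash with distinct seeds).
        return hash((k, item_id)) % self.pool_size

    def embed(self, item_id):
        # Aggregate (here: sum) the K pooled rows chosen for this item.
        rows = [self._index(item_id, k) for k in range(self.num_hashes)]
        return self.pool[rows].sum(axis=0)
```

Because the mapping is purely a function of the item ID and the shared pool, a previously unseen ID (the OOV case) is embedded exactly like any other, and clients never need to store or transmit a full per-item embedding table.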
