Abstract

Given the challenges in recommendation effectiveness, communication costs, and privacy associated with federated learning, the proposed algorithm combines locality-sensitive hashing (LSH) with three federated recommendation models: Generalized Matrix Factorization, Multilayer Perceptron, and Neural Matrix Factorization. First, the participation weights of the model are determined from the participation degree of the federated learning clients to improve the efficiency of joint training. Second, the local parameters of the federated aggregation model are divided into two groups to protect user embeddings. Finally, rapid mapping and similarity retrieval of the uploaded parameters are performed using LSH to protect user privacy and shorten training time. We conducted experiments comparing the LSH-based method with Laplace noise-based differential privacy in terms of recommendation effectiveness, communication costs, and privacy preservation. Experimental results demonstrate that the LSH-based models achieve a favorable balance between recommendation effectiveness and privacy protection, with improved time performance.
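To illustrate the core idea of LSH-based mapping of uploaded parameters, the following minimal sketch applies random-hyperplane (cosine) LSH to a client's flat parameter vector so that the server can compare compact binary signatures by Hamming similarity rather than inspecting raw embedding values. This is not the paper's exact construction; the signature length, the helper names `lsh_signature` and `hamming_similarity`, and the shared random planes are illustrative assumptions.

```python
# Sketch only: random-hyperplane LSH over a client's upload parameters.
# Assumed, not taken from the paper: signature length, helper names, shared planes.
import numpy as np


def lsh_signature(params: np.ndarray, planes: np.ndarray) -> np.ndarray:
    """Map a flat parameter vector to a binary signature via random hyperplanes."""
    return (planes @ params >= 0).astype(np.uint8)


def hamming_similarity(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Fraction of matching bits; higher means the underlying vectors are closer in angle."""
    return float(np.mean(sig_a == sig_b))


rng = np.random.default_rng(0)
dim, n_bits = 64, 128                        # embedding size and signature length (assumed)
planes = rng.standard_normal((n_bits, dim))  # random hyperplanes shared by all clients

client_update = rng.standard_normal(dim)                        # stand-in for a local update
similar_update = client_update + 0.05 * rng.standard_normal(dim)  # a slightly perturbed copy

sig_1 = lsh_signature(client_update, planes)
sig_2 = lsh_signature(similar_update, planes)
print(f"Hamming similarity of near-identical updates: {hamming_similarity(sig_1, sig_2):.2f}")
```

Because only the binary signatures leave the client, the server can perform fast similarity retrieval over uploads while the raw embedding values are never transmitted, which is the trade-off the abstract contrasts with Laplace noise-based differential privacy.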
