With the development of 5G communication and smart devices, the prosperity of online content has boosted research on Recommender Systems (RS). To address the data scarcity problem, researchers employ knowledge transfer techniques to improve the accuracy of RS. Data sharing and data augmentation are promising approaches, but data is exposed to privacy leakage during sharing. Thus, Federated Learning (FL) has been adopted to collaboratively train recommender models while preserving data privacy. A Federated Recommender System (FRS) combines FL and RS to provide distributed recommendation services to users. However, existing FRS suffer from massive communication overhead and system heterogeneity: the need to transmit models and the diversity of clients impose significant communication costs on the system. In this paper, we propose an Efficient Federated Recommender System with Adaptive Model Pruning and Momentum-based Batch Adjustment (eFRSA2) to reduce the communication overhead of FRS. eFRSA2 contains two modules. Adaptive Model Pruning uses magnitude pruning to reduce the communication volume and adaptively adjusts the compression ratios of different clients to maintain model accuracy. Momentum-based Batch Adjustment tunes the number of local training batches with an update rule similar to gradient descent with momentum, aligning the clients' local computation times and further reducing the communication overhead. Experimental results demonstrate that eFRSA2 reduces communication volume by up to 90% and mitigates system heterogeneity by over 75%, demonstrating its superiority in training efficiency. Source code can be found at https://github.com/shhjwu5/eFRSA2.
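The two mechanisms named above can be illustrated with a minimal sketch. Note that the function names, learning rate, and momentum coefficient below are illustrative assumptions for exposition, not the paper's actual algorithm; see the linked repository for the real implementation.

```python
import numpy as np

def magnitude_prune(weights, compression_ratio):
    """Magnitude pruning sketch: zero out the smallest-magnitude weights.
    compression_ratio is the fraction of weights dropped (e.g. 0.9 drops 90%);
    in an adaptive scheme this ratio would differ per client."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * compression_ratio)  # number of weights to prune
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) > threshold, weights, 0.0)

def adjust_batches(batch_num, local_time, target_time, velocity,
                   lr=0.5, momentum=0.9):
    """Momentum-style update of a client's local batch count so that its
    per-round computation time moves toward a common target (hypothetical
    hyperparameters lr and momentum)."""
    grad = local_time - target_time            # positive => client too slow
    velocity = momentum * velocity - lr * grad # momentum accumulation
    batch_num = max(1, int(round(batch_num + velocity)))
    return batch_num, velocity
```

For example, a client whose round takes 12 s against a 10 s target would have its batch count nudged down, while pruning 90% of a weight tensor before upload cuts the transmitted payload roughly tenfold under sparse encoding.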