Privacy-preserving aggregation protocols are an essential building block of privacy-enhanced federated learning (FL), enabling the server to obtain the sum of users’ locally trained models while keeping local training data private. However, most work on privacy-preserving aggregation provides a privacy guarantee for only a single communication round of FL. Since FL usually involves long-term training over many rounds, dynamic user participation across rounds can lead to additional information leakage. To address this, we propose a long-term privacy-preserving aggregation (LTPA) protocol that provides both single-round and multi-round privacy guarantees. Specifically, we first introduce our batch-partitioning-dropping-updating (BPDU) strategy, which enables any user-dynamic FL system to provide multi-round privacy guarantees. We then present our LTPA construction, which integrates the proposed BPDU strategy with the state-of-the-art privacy-preserving aggregation protocol. Furthermore, we investigate, both theoretically and experimentally, how LTPA parameter settings affect the trade-off between privacy guarantee, protocol efficiency, and FL convergence performance. Experimental results show that LTPA has complexity similar to that of the state-of-the-art — an additional cost of only around 1.04× for a 100,000-user FL system — while additionally providing a long-term privacy guarantee.
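The abstract does not specify the mechanics of BPDU, but the name suggests that user churn is handled at the granularity of whole batches rather than individual users, so that the participant sets of different rounds differ only by entire batches. The following is a minimal, purely hypothetical sketch of such a batch-granular participation schedule; the class, method names, and churn rules are illustrative assumptions, not the paper's construction.

```python
import random


class BatchedParticipationSchedule:
    """Hypothetical sketch of a batch-partitioning-dropping-updating style
    schedule (an assumption based on the BPDU name, not the paper's design):
    users are partitioned into fixed-size batches, and dropouts/joins are
    applied to whole batches, so per-round aggregates never differ by a
    single user, limiting what differencing consecutive sums can reveal."""

    def __init__(self, users, batch_size, seed=0):
        rng = random.Random(seed)
        users = list(users)
        rng.shuffle(users)
        # Partition the shuffled user list into fixed batches.
        self.batches = [users[i:i + batch_size]
                        for i in range(0, len(users), batch_size)]
        # All batches start active (participating in aggregation).
        self.active = set(range(len(self.batches)))

    def drop_batch(self, idx):
        # A departing user takes its entire batch offline for the round.
        self.active.discard(idx)

    def update_batch(self, idx, new_users):
        # Refill a departed batch with fresh users of the same size.
        self.batches[idx] = list(new_users)
        self.active.add(idx)

    def round_participants(self):
        # Users whose model updates are summed in the current round.
        return [u for i in sorted(self.active) for u in self.batches[i]]
```

Under this sketch, an adversary observing sums from two consecutive rounds can only attribute the difference to a whole batch of users, never to one individual, which is one plausible way a batch-level strategy could bound multi-round leakage.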