Abstract

Secure model aggregation across many users is a key component of federated learning systems. The state-of-the-art protocols for secure model aggregation, which are based on additive masking, require all users to quantize their model updates to the same level of quantization. This severely degrades their performance due to a lack of adaptation to the available communication resources, e.g., bandwidth, at different users. As the main contribution of our paper, we propose HeteroSAg, a scheme that allows secure model aggregation while using heterogeneous quantization. HeteroSAg enables the edge users to adjust their quantization proportional to their available communication resources, which can provide a substantially better trade-off between the accuracy of training and the communication time. Our proposed scheme is based on a grouping strategy that partitions the network into groups and partitions the local model updates of users into segments. Instead of applying the secure aggregation protocol to the entire local model update vector, it is applied to segments with specific coordination between users. We further demonstrate how HeteroSAg can enable Byzantine robustness while simultaneously achieving secure aggregation. Finally, we prove the convergence guarantees of HeteroSAg under heterogeneous quantization in the non-Byzantine scenario.
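
To make the two ingredients described above concrete, the sketch below illustrates per-user quantization levels and per-segment additive masking on a toy example. The group assignment, quantizer levels, and mask generation here are simplifying assumptions for illustration only, not the HeteroSAg protocol itself (in particular, real secure aggregation applies masks in a finite field and coordinates groups per segment).

```python
# Illustrative sketch (not the authors' implementation): users quantize their
# model update at a level matched to their bandwidth, the update vector is
# split into segments, and additive masks are applied per segment among users
# that share the same quantizer so the masks cancel at the server.
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, levels, lo=-1.0, hi=1.0):
    """Unbiased stochastic uniform quantizer with `levels` values on [lo, hi]."""
    step = (hi - lo) / (levels - 1)
    scaled = (np.clip(x, lo, hi) - lo) / step
    low = np.floor(scaled)
    q = low + (rng.random(x.shape) < scaled - low)   # round up with prob. = residual
    return q.astype(np.int64), step, lo

# Three users with bandwidth-dependent quantization levels (assumed values).
updates = [rng.uniform(-1, 1, size=12) for _ in range(3)]
user_levels = [4, 4, 16]          # users 0 and 1 share a coarse quantizer
num_segments = 3
seg_len = updates[0].size // num_segments

aggregate = np.zeros_like(updates[0])
for s in range(num_segments):
    sl = slice(s * seg_len, (s + 1) * seg_len)
    group_sum = np.zeros(seg_len)
    # Users 0 and 1 form one group: user 0 adds a mask, user 1 subtracts the
    # same mask, so the masks cancel in the sum seen by the server.
    mask = rng.integers(0, 2**16, size=seg_len)
    for uid, sign in [(0, +1), (1, -1)]:
        q, step, lo = quantize(updates[uid][sl], user_levels[uid])
        group_sum += (q + sign * mask) * step + lo    # masked message
    # User 2 uses a finer quantizer; shown unmasked here purely for brevity.
    q, step, lo = quantize(updates[2][sl], user_levels[2])
    group_sum += q * step + lo
    aggregate[sl] = group_sum

print("true mean update :", np.round(np.mean(updates, axis=0), 3))
print("secure aggregate :", np.round(aggregate / 3, 3))
```

Running the sketch shows that the server recovers (an unbiasedly quantized estimate of) the average update without ever seeing the unmasked individual updates of the grouped users, even though the two groups use different quantization levels.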
