Federated learning enables the development of robust models without directly accessing users' data. However, recent studies indicate that federated learning remains vulnerable to privacy leakage. To address this issue, local differential privacy mechanisms have been incorporated into federated learning; unfortunately, local differential privacy reduces data utility. To explore the trade-off between privacy budgets and data utility in federated learning, we propose FedAPCA, a federated learning framework with clustering hierarchical aggregation and an adaptive piecewise mechanism under multiple privacy levels, which balances privacy preservation and model accuracy. First, we introduce an adaptive piecewise mechanism that dynamically adjusts the perturbation intervals according to the data ranges of the different layers of the model, minimizing perturbation variance while maintaining the same level of privacy. Second, we propose two dynamic privacy budget allocation methods, one allocating the privacy budget based on global accuracy and global loss and the other based on local accuracy and local loss, so that better model accuracy can be achieved under the same total privacy budget. Finally, we propose a clustering hierarchical aggregation method for the model aggregation stage: within each cluster, the perturbed updates are corrected by unbiased estimation according to the per-layer variance before the model is updated and aggregated. FedAPCA improves the balance between privacy preservation and model accuracy. Experimental results comparing FedAPCA with state-of-the-art multi-level-privacy local differential privacy federated learning frameworks on the MNIST and CIFAR-10 datasets demonstrate that FedAPCA improves model accuracy by 1%–2%.
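For readers unfamiliar with the perturbation primitive the abstract builds on, the sketch below shows the standard piecewise mechanism for epsilon-local differential privacy (Wang et al., 2019), which reports an unbiased noisy estimate of a value scaled into [-1, 1]. This is only the baseline mechanism under stated assumptions; FedAPCA's layer-wise adaptation of the perturbation interval and its budget allocation rules are not reproduced here, and the function name `piecewise_mechanism` is our own illustrative choice.

```python
import numpy as np


def piecewise_mechanism(t: float, eps: float, rng=None) -> float:
    """Perturb t in [-1, 1] with the standard piecewise mechanism (eps-LDP).

    The output lies in [-C, C] and is an unbiased estimate of t; this is the
    baseline primitive that an adaptive variant would tune per model layer.
    """
    rng = rng or np.random.default_rng()
    z = np.exp(eps / 2.0)
    C = (z + 1.0) / (z - 1.0)
    # Interval of length C - 1 centered on the "truthful" region around t.
    left = (C + 1.0) / 2.0 * t - (C - 1.0) / 2.0
    right = left + C - 1.0
    if rng.random() < z / (z + 1.0):
        # With high probability, report a value close to t.
        return rng.uniform(left, right)
    # Otherwise report a value from [-C, left) or (right, C],
    # chosen proportionally to the two interval lengths.
    left_len, right_len = left + C, C - right
    if rng.random() < left_len / (left_len + right_len):
        return rng.uniform(-C, left)
    return rng.uniform(right, C)


# Example: a client perturbs one (rescaled) model-update coordinate.
noisy = piecewise_mechanism(0.3, eps=1.0)
```

Because each reported value is an unbiased estimate, a server can average many perturbed values and still recover the true mean in expectation, which is the property the clustering hierarchical aggregation described in the abstract relies on when correcting for per-layer perturbation variance.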