Abstract
Differential privacy has garnered significant attention for its ability to protect the privacy of parameters uploaded in federated learning. Two key factors in differential privacy, the clipping threshold and the privacy budget, determine the amount of noise added to the data to achieve privacy protection. However, traditional differential privacy approaches do not account for the dynamic characteristics of the federated learning process, often forcing a trade-off between privacy protection and model performance. To address this limitation, this study proposes a federated learning framework that combines dynamic threshold clipping with adaptive privacy budgeting. By adjusting these two parameters in response to changes observed during training, our method applies privacy protection more precisely without compromising model performance. The framework balances privacy preservation and model accuracy by dynamically adapting to the training process and the data distribution. Experimental results demonstrate that our approach significantly outperforms traditional differential privacy-based federated learning algorithms.
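To make the two parameters concrete, the following is a minimal sketch of the standard Gaussian-mechanism pipeline the abstract refers to: each gradient is clipped to an L2 threshold, then noise calibrated to (ε, δ)-differential privacy is added. The `dynamic_threshold` rule (a quantile of recent client gradient norms) is a hypothetical illustration of what "dynamic threshold clipping" could look like, not the paper's actual method.

```python
import math
import random

def dp_clip_and_noise(grad, clip_c, epsilon, delta):
    """Clip a gradient to L2 norm clip_c, then add Gaussian noise
    calibrated to (epsilon, delta)-differential privacy."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_c / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    # Gaussian mechanism: noise scale grows with the clipping threshold
    # (the sensitivity) and shrinks as the privacy budget epsilon grows.
    sigma = clip_c * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return [g + random.gauss(0.0, sigma) for g in clipped]

def dynamic_threshold(grad_norms, quantile=0.5):
    """Hypothetical dynamic clipping rule: set the threshold to a
    quantile (here the median) of recent client gradient norms."""
    ranked = sorted(grad_norms)
    return ranked[int(quantile * (len(ranked) - 1))]
```

A smaller clipping threshold reduces both the gradient's influence and the injected noise; tracking the evolving distribution of gradient norms is one way to keep that balance tuned as training progresses.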