Spurred by the simultaneous need for data privacy protection and data sharing, federated learning (FL) has been proposed. However, FL still carries a risk of privacy leakage. This paper proposes an improved Differential Privacy (DP) algorithm to protect federated learning models. Additionally, the Fast Fourier Transform (FFT) is used to compute the privacy budget ε, minimizing the impact of limited computational resources and large numbers of users on the effectiveness of the trained model. Moreover, instead of analyzing the privacy budget ε directly through various methods, the Privacy Loss Distribution (PLD) and privacy curves are adopted, the number of manually assigned hyperparameters is reduced, and the grid parameters used for the FFT are refined. The improved algorithm tightens parameter bounds and minimizes the influence of human factors with minimal loss of efficiency. It decreases the errors caused by the truncation and discretization of PLDs while widening the discretization interval to reduce the computational workload. Furthermore, an improved activation function based on a tempered sigmoid with a single parameter smooths the accuracy curve and mitigates drastic fluctuations during model training. Finally, simulation results on real datasets show that our improved DP algorithm, which accounts for the long tails of the PLD, achieves a better balance between privacy and utility in federated learning models.
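The FFT-based accounting summarized above can be illustrated with a minimal sketch: a PLD discretized on a uniform grid is self-convolved over k training steps via the FFT, and δ(ε) is read off the composed distribution. The grid spacing, truncation range, and the toy Gaussian-shaped PLD below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def compose_pld_fft(pmf, k):
    """Self-convolve a discretized PLD k times using the FFT."""
    n = len(pmf) * k                    # zero-pad so the k-fold convolution does not wrap
    size = 1 << (n - 1).bit_length()    # next power of two for an efficient FFT
    f = np.fft.rfft(pmf, size)
    composed = np.fft.irfft(f ** k, size)[: (len(pmf) - 1) * k + 1]
    composed = np.clip(composed, 0.0, None)   # clip tiny negative FFT round-off
    return composed / composed.sum()

# Toy PLD: probability mass over privacy-loss values on a uniform grid.
grid = np.linspace(-1.0, 1.0, 201)
pmf = np.exp(-0.5 * (grid / 0.3) ** 2)
pmf /= pmf.sum()

k = 50
composed = compose_pld_fft(pmf, k)
# After k-fold composition the loss grid spans k times the original range.
composed_grid = np.linspace(-1.0 * k, 1.0 * k, len(composed))

# delta(eps) = E[max(0, 1 - exp(eps - L))] over the composed privacy loss L.
eps = 2.0
delta = np.sum(composed * np.maximum(0.0, 1.0 - np.exp(eps - composed_grid)))
```

Truncating the grid too aggressively loses the long tail of the PLD, which is exactly the error source the improved algorithm targets.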
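The single-parameter tempered sigmoid mentioned above can be sketched as follows; the parameter is assumed here (hypothetically, since the abstract does not name it) to be a temperature tau that rescales the pre-activation, giving a gentler transition and a bounded gradient:

```python
import math

def tempered_sigmoid(x, tau=2.0):
    """sigmoid(x / tau): larger tau flattens the curve, smoothing training."""
    return 1.0 / (1.0 + math.exp(-x / tau))
```

At tau = 1 this reduces to the standard sigmoid; larger tau damps the activation's slope, which is consistent with the claimed smoothing of accuracy fluctuations under DP noise.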