Traditional data access control suffers from privacy leakage risks and low efficiency; in distributed environments in particular, data transmission and centralized processing may expose sensitive information. To improve data access control on information platforms, ensure data security, and accelerate global model training, this article introduces a joint mechanism combining differential privacy with FedAvgM (Federated Averaging with Momentum) optimization to strengthen privacy protection and improve training efficiency. First, the Laplace noise mechanism was adopted to prevent user privacy information from being leaked during global model training: noise is added to the dataset, ensuring the privacy of each data access. Then, with the FedAvgM optimization algorithm, distributed nodes compute local model parameters separately and merge them through weighted averaging, reducing the training time of the global model and improving efficiency. Finally, layered encryption technology adds multiple layers of encryption during data transmission to secure the transmission link, and a dynamic permission allocation mechanism limits the frequency of access to sensitive data. The experimental results demonstrate that when the privacy budget is ε = 20, the Laplace-mechanism privacy protection method still maintains 73% accuracy in data transmission, with a privacy leakage risk of only 0.27. Under the same number of training epochs, the FedAvgM optimization algorithm achieved a data transmission accuracy of 97%, and its convergence speed improved by 0.29%/min over epochs 1–20, showing faster convergence. Under four different noise attack methods, the minimum noise resistance of the layered encryption scheme is 15.4 dB, 18.5 dB, 17.1 dB, and 17.3 dB, respectively. This method effectively mitigates the privacy leakage risk and low efficiency of traditional data access control.
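The Laplace mechanism described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the query-sensitivity value, and the example counts are all illustrative assumptions. Noise is drawn from Laplace(0, Δf/ε), so a larger privacy budget (e.g. ε = 20) injects less noise and preserves more utility:

```python
import numpy as np

def laplace_mechanism(values, sensitivity, epsilon, seed=None):
    """Add Laplace noise with scale b = sensitivity / epsilon (pure ε-DP).

    values      : array-like query results to perturb
    sensitivity : L1 sensitivity Δf of the query
    epsilon     : privacy budget ε (larger ε => less noise)
    """
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon  # noise scale b = Δf / ε
    return np.asarray(values, dtype=float) + rng.laplace(
        loc=0.0, scale=scale, size=np.shape(values)
    )

# Illustrative counting query with sensitivity 1 and a generous budget ε = 20:
counts = np.array([120.0, 85.0, 40.0])
noisy = laplace_mechanism(counts, sensitivity=1.0, epsilon=20.0, seed=42)
```

With ε = 20 the noise scale is only 0.05, so the noisy counts stay very close to the true values, which is consistent with the abstract's observation that accuracy remains high at large budgets.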
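The server-side FedAvgM update (weighted averaging of client updates plus a momentum term) can likewise be sketched. This follows the standard FedAvgM formulation rather than the paper's exact code; the function name, the momentum coefficient β = 0.9, and the toy client weights are assumptions for illustration:

```python
import numpy as np

def fedavgm_round(global_w, client_ws, client_sizes, momentum, beta=0.9, server_lr=1.0):
    """One FedAvgM server round.

    Aggregates client models via data-size-weighted averaging of their
    updates (pseudo-gradients), then applies server momentum:
        Δ = Σ_k p_k (w - w_k),  v ← βv + Δ,  w ← w - η·v
    """
    p = np.asarray(client_sizes, dtype=float)
    p /= p.sum()  # weights proportional to each node's data size
    delta = sum(pk * (global_w - wk) for pk, wk in zip(p, client_ws))
    momentum = beta * momentum + delta        # v ← βv + Δ
    new_w = global_w - server_lr * momentum   # w ← w − η·v
    return new_w, momentum

# Toy round: two nodes holding 100 and 300 samples, zero initial momentum.
w = np.zeros(3)
v = np.zeros(3)
clients = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]
w, v = fedavgm_round(w, clients, client_sizes=[100, 300], momentum=v)
```

With zero initial momentum the first round reduces to plain FedAvg (the size-weighted average of client parameters); the momentum buffer then accumulates update direction across rounds, which is the source of the faster convergence the abstract reports.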