Abstract

Federated learning is a decentralized machine learning approach in which multiple participants collaboratively train a model. With the development of quantum computing, the integration of quantum computing and federated learning has shown significant potential. However, existing research has demonstrated that, like classical federated learning models, quantum federated learning models face various security threats and privacy-leakage issues. This paper proposes a quantum federated learning model based on quantum noise. Adding quantum noise to the model not only mitigates privacy leakage but also enhances model robustness, effectively resisting adversarial attacks. Extensive numerical simulations on various datasets are conducted to evaluate the effectiveness of the proposed method. The results reveal a more pronounced variation in robust training on high-dimensional datasets than on low-dimensional ones. Furthermore, the impact of noise intensity on model robustness is explored. Experiments demonstrate that a small amount of quantum noise does not significantly affect accuracy, and that model robustness improves as the noise increases. Finally, three different types of quantum noise are used for robustness testing to analyze the impact of quantum noise on the robustness of quantum machine learning models. The extensive experimental results verify that noise can improve the security of distributed quantum machine learning.
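The noise-injection idea the abstract refers to can be illustrated with a minimal sketch of a single-qubit depolarizing channel, one of the standard quantum noise models (the function name, the choice of channel, and the density-matrix representation here are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def depolarize(rho: np.ndarray, p: float) -> np.ndarray:
    """Apply a depolarizing channel: rho -> (1 - p) * rho + p * I/d.

    With probability p the state is replaced by the maximally mixed
    state I/d, which blurs information about the original state --
    the intuition behind using noise for privacy and robustness.
    """
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

# Pure state |0><0|.
rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
noisy = depolarize(rho, 0.2)

# The channel is trace-preserving, but the purity Tr(rho^2)
# drops below 1, reflecting the information loss noise induces.
trace = np.trace(noisy).real
purity = np.trace(noisy @ noisy).real
```

Stronger noise (larger p) pushes the state further toward the maximally mixed state, which is consistent with the abstract's observation that increasing noise intensity trades a small accuracy cost for improved robustness.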

