Abstract

This paper focuses on the distributed online optimization problem in multi-agent systems with privacy preservation. Each agent exchanges local information with its neighbors over strongly connected, time-varying directed graphs. Since this information exchange is prone to leakage, a distributed push-sum dual averaging algorithm based on a differential privacy mechanism is proposed to protect the agents' data. In addition, to handle situations where the gradient of a node's cost function is unavailable, a one-point gradient estimator is designed to approximate the gradient and guide the update of the decision variables. With appropriate choices of the stepsizes and the exploration parameter, the algorithm effectively protects the privacy of the agents while achieving sublinear regret of order O(T^{3/4}). Furthermore, this paper explores the effect of the one-point estimation parameters on the regret in the online setting and investigates the relation between the convergence of the individual regret and the differential privacy level. Finally, several federated learning experiments are conducted to verify the efficacy of the algorithm.
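
For readers unfamiliar with the bandit-feedback setting, the sketch below illustrates the one-point gradient estimation idea referenced in the abstract, together with a Laplace perturbation of the kind used by differential privacy mechanisms. It is a minimal illustration rather than the paper's algorithm: the function `one_point_gradient_estimate`, the least-squares loss, the exploration parameter `delta = 0.1`, and the Laplace noise scale are all assumptions chosen for the example.

```python
import numpy as np

def one_point_gradient_estimate(f, x, delta, rng=None):
    """One-point (bandit) gradient estimate of f at x.

    Queries f once at the perturbed point x + delta * u, with u drawn
    uniformly from the unit sphere, and returns (d / delta) * f(x + delta * u) * u,
    an unbiased estimate of the gradient of the delta-smoothed version of f.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)  # uniform random direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u

# Hypothetical local cost: a least-squares loss on one agent's data batch.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
loss = lambda x: 0.5 * np.sum((A @ x - b) ** 2)

x = np.zeros(5)
g_hat = one_point_gradient_estimate(loss, x, delta=0.1, rng=rng)

# Assumed privacy step: perturb the shared quantity with Laplace noise
# before it is transmitted to neighbors (noise scale chosen arbitrarily here).
g_private = g_hat + rng.laplace(scale=0.5, size=g_hat.shape)
```

In this sketch the exploration parameter delta trades off estimation bias against variance, which is consistent with the abstract's remark that the choice of exploration parameters affects the achievable regret.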
