Abstract

We present an optimization framework for solving multiagent convex programs subject to inequality constraints while keeping the agents’ state trajectories private. Each agent has an objective function depending only upon its own state, and the agents are collectively subject to global constraints. The agents do not communicate with each other directly but instead route messages through a trusted cloud computer. The cloud adds noise to the data it sends to the agents in accordance with the framework of differential privacy, thereby keeping each agent's state trajectory private from all other agents and from any eavesdroppers. The resulting private problem can be viewed as a stochastic variational inequality, and it is solved using a projection-based method for variational inequalities that resembles a noisy primal-dual gradient algorithm. Convergence of the optimization algorithm in the presence of noise is proven, and a quantifiable tradeoff between privacy and convergence is extracted from this proof. Simulation results demonstrate numerical convergence under both $\epsilon$-differential privacy and $(\epsilon, \delta)$-differential privacy.
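
To make the flavor of such an update concrete, the following Python sketch shows a noisy projected primal-dual gradient iteration of the general kind the abstract describes. It is not the paper's exact algorithm: the toy problem, step size, sensitivity bound, and the use of per-iteration Laplace noise for $\epsilon$-differential privacy are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's algorithm): a cloud aggregates
# the agents' states, perturbs the constraint signal with Laplace noise before
# broadcasting it, and each agent takes a projected gradient step locally.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: two agents with scalar states x1, x2.
# Agent i minimizes f_i(x_i) = (x_i - c_i)^2, and the agents share the
# global inequality constraint g(x) = x1 + x2 - 1 <= 0.
c = np.array([2.0, 1.0])     # agents' private targets
x = np.zeros(2)              # agents' states (kept private)
lam = 0.0                    # dual variable for the shared constraint

alpha = 0.05                 # step size (assumed)
epsilon = 1.0                # privacy level: smaller = more private
sensitivity = 1.0            # assumed bound on one agent's influence on g(x)
num_iters = 2000

for _ in range(num_iters):
    # Cloud: evaluate the shared constraint and add Laplace noise before
    # broadcasting, so no agent or eavesdropper sees the exact states.
    g_val = x.sum() - 1.0
    noisy_g = g_val + rng.laplace(scale=sensitivity / epsilon)

    # Cloud: dual ascent step, projected onto the nonnegative orthant.
    lam = max(0.0, lam + alpha * noisy_g)

    # Agents: each updates its own state using only its private gradient
    # and the (noisy) dual signal received from the cloud.
    grad_f = 2.0 * (x - c)            # gradients of the local objectives
    x = x - alpha * (grad_f + lam)    # dg/dx_i = 1 for both agents

print("approx. primal solution:", x, "dual:", lam)
```

With the noiseless constraint this iteration converges to the KKT point $x \approx (1, 0)$, $\lambda \approx 2$; the injected noise perturbs the iterates, illustrating the privacy–convergence tradeoff the abstract quantifies.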
