Abstract

In distributed or multiparty computations, optimization-theoretic methods offer appealing privacy properties compared to cryptographic and differential-privacy methods. However, unlike cryptography and differential privacy, optimization methods currently lack a formal quantification of the privacy they can provide. The main contribution of this paper is a quantification of the privacy of a broad class of optimization approaches. The optimization procedures introduce ambiguity in the problem's data, so an adversarial observer can only localize that data to within an uncertainty set. We formally define a one-to-many relation between a given message observed by the adversary and an uncertainty set of the problem's data. A privacy measure is then formalized on the basis of this uncertainty set, and its properties are analyzed. The key ideas are illustrated with examples, including localization and average consensus.
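The one-to-many relation can be illustrated with a toy sketch (not taken from the paper; the sampling scheme and the spread-based privacy measure below are purely illustrative assumptions). In an average-consensus setting, suppose the adversary observes only the final average: every initial-value vector with that same mean is consistent with the observation, so the observation corresponds to an entire uncertainty set rather than a single data point.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4
x_true = rng.uniform(0.0, 10.0, size=n)   # private initial values
observed = x_true.mean()                  # the message the adversary sees


def consistent_sample(observed_mean, n, rng):
    """Draw a random initial-value vector consistent with the observation.

    Any vector can be shifted to match the observed mean, which makes the
    relation from observation to data one-to-many.
    """
    x = rng.uniform(0.0, 10.0, size=n)
    return x - x.mean() + observed_mean   # shift so the mean matches


samples = np.array([consistent_sample(observed, n, rng) for _ in range(1000)])

# Every sample reproduces the observed message exactly...
assert np.allclose(samples.mean(axis=1), observed)

# ...yet the per-node values vary widely. As a crude (illustrative)
# privacy measure, take the spread of the sampled uncertainty set
# along each coordinate: a large spread means the observation reveals
# little about any individual node's value.
diameter = samples.max(axis=0) - samples.min(axis=0)
print(diameter)
```

The point of the sketch is only the structure: one observed message, many consistent data vectors, and a scalar measure of how large that consistent set is.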
