Abstract

Differential privacy is a confidentiality property for database queries which allows for the release of statistical information about the content of a database without disclosing personal data. The variety of database queries and enforcement mechanisms has recently sparked the development of a number of mechanized proof techniques for differential privacy. Personal data, however, are often spread across multiple databases and queries have to be jointly computed by multiple, possibly malicious, parties. Many cryptographic protocols have been proposed to protect the data in transit on the network and to achieve differential privacy in a distributed, adversarial setting. Proving differential privacy for such protocols is hard and, unfortunately, out of the scope of the aforementioned mechanized proof techniques. In this work, we present the first framework for the mechanized verification of distributed differential privacy. We propose a symbolic definition of differential privacy for distributed databases, which takes into account Dolev-Yao intruders and can be used to reason about compromised parties. Furthermore, we develop a linear, distance-aware type system to statically and automatically enforce distributed differential privacy in cryptographic protocol implementations (expressed in the RCF calculus). We also provide an algorithmic variant of our type system, which we prove sound and complete. Finally, we tested our analysis technique on a recently proposed protocol for privacy-preserving web analytics: we discovered a new attack acknowledged by the authors, proposed a fix, and successfully type-checked the revised variant.
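The opening sentence describes the core guarantee in the centralized setting: noisy statistical answers that do not reveal any individual's record. As a minimal illustrative sketch (not the paper's mechanism), the classic Laplace mechanism for a counting query shows the idea: a count has sensitivity 1, so adding Laplace noise with scale 1/ε yields ε-differential privacy. All names below are hypothetical.

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via inverse-CDF transform sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one record
    # changes the true count by at most 1, so Laplace noise with scale
    # 1/epsilon suffices for epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

The paper's setting is harder: the data are split across mutually distrusting parties, so this kind of noise addition must be carried out by a cryptographic protocol, and the privacy proof must also account for Dolev-Yao intruders and compromised participants.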
