Abstract

Quantitative Information Flow (QIF) and Differential Privacy (DP) are both concerned with the protection of sensitive information, but they are rather different approaches. In particular, QIF considers the expected probability of a successful attack, while DP (in both its standard and local versions) is a max-case measure, in the sense that it is compromised by the existence of a possible attack, regardless of its probability. Comparing systems is a fundamental task in these areas: one wishes to guarantee that replacing a system A by a system B is a safe operation, that is, that the privacy of B is no worse than that of A. In QIF, a refinement order provides strong such guarantees, while, in DP, mechanisms are typically compared w.r.t. the privacy parameter ε in their definition. In this paper, we explore a variety of refinement orders, inspired by that of QIF, providing precise guarantees for max-case leakage. We study simple structural ways of characterising them, the relation between them, efficient methods for verifying them, and their lattice properties. Moreover, we apply these orders to the task of comparing DP mechanisms, raising the question of whether the order based on ε provides strong privacy guarantees. We show that, while this is often the case for mechanisms of the same “family” (geometric, randomised response, etc.), it rarely holds across different families.

Highlights

  • The enormous growth in the use of internet-connected devices and the big-data revolution have created serious privacy concerns, and motivated an intensive area of research aimed at devising methods to protect the users’ sensitive information

  • Two main frameworks have emerged in this area: Differential Privacy (DP) and Quantitative Information Flow (QIF)

  • We will analyze various mechanisms for DP, local differential privacy (LDP), and d-privacy to see in which cases the order induced by ε is consistent with the three orders above


Summary

Introduction

The enormous growth in the use of internet-connected devices and the big-data revolution have created serious privacy concerns, and motivated an intensive area of research aimed at devising methods to protect the users’ sensitive information. If we consider a gain of 1 when the attacker guesses the right class (r versus either p or a) and 0 otherwise, we have that the highest possible gain in A is (3/4) π(p) + (1/2) π(a) = 5/12, while in B it is (2/3) π(p) + (2/3) π(a) = 4/9, which is higher than 5/12. This is consistent with our orders: it is possible to show that none of the three orders holds between A and B, and hence we should not expect B to be better (for privacy) than A with respect to all possible adversaries. A fundamental issue is how to prove that these robust orders hold: since max Q and prv M involve universal quantifications, it is important to devise finitary methods to verify them. To this purpose, we will study their characterizations as structural relations between stochastic matrices (representing the mechanisms to be compared), along the lines of what was done for avg G. We will analyze various mechanisms for DP, LDP, and d-privacy to see in which cases the order induced by ε is consistent with the three orders above.
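The gain comparison above can be reproduced with a short exact-arithmetic calculation. The sketch below assumes a uniform prior π(r) = π(p) = π(a) = 1/3, which is the prior consistent with the stated totals 5/12 and 4/9; exact fractions avoid floating-point noise.

```python
# Reproduce the worked gain comparison between mechanisms A and B.
# Assumption: a uniform prior pi over the three classes {r, p, a}
# (inferred from the stated totals 5/12 and 4/9, not given explicitly).
from fractions import Fraction as F

pi = F(1, 3)  # pi(p) = pi(a) = 1/3 (assumed uniform prior)

# Highest expected gain of an adversary guessing the class (r vs. p or a):
gain_A = F(3, 4) * pi + F(1, 2) * pi  # coefficients as stated for A
gain_B = F(2, 3) * pi + F(2, 3) * pi  # coefficients as stated for B

print(gain_A, gain_B)   # 5/12 4/9
print(gain_B > gain_A)  # True: for this adversary, B leaks more than A
```

Since B gives this particular adversary a strictly higher gain while other adversaries may prefer A, no refinement order can hold between the two mechanisms, matching the claim in the text.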

Contribution
Related Work
Plan of the Paper
Average-Case Refinement
Refinement
Max-Case Refinement
Differential Privacy and d-Privacy
Oblivious Mechanisms
Applying Noise to the Data of a Single Individual
Privacy-Based Leakage and Refinement Orders
Privacy as Max-Case Capacity
Privacy-Based Refinement
Application
Preliminaries
Refinement Order within Families of Mechanisms
Refinement Order between Families of Mechanisms
Asymptotic Behavior
Discussion
Lattice Properties
Conclusions
