Abstract

Differential privacy (DP) aims to endow data processing systems with inherent privacy guarantees, offering strong protections for personal data. But DP’s approach to privacy carries with it certain assumptions about how mathematical abstractions will be translated into real-world systems, which—if left unexamined and unrealized in practice—could function to shield data collectors from liability and criticism rather than substantively protect data subjects from privacy harms. This article investigates these assumptions and discusses their implications for using DP to govern data-driven systems. In Parts 1 and 2, we introduce DP as, on one hand, a mathematical framework and, on the other, a kind of real-world sociotechnical system, using a hypothetical case study to illustrate how the two can diverge. In Parts 3 and 4, we discuss how DP frames privacy loss, data processing interventions, and data subject participation, arguing that it could exacerbate existing problems in privacy regulation. In Part 5, we conclude with a discussion of DP’s potential interactions with the endogeneity of privacy law, and we propose principles for best governing DP systems. By making these assumptions and their consequences explicit, we hope to help DP realize its promise of better substantive privacy protections.
