Abstract

Differentially private algorithms uncover information about a population while granting a form of individual privacy to each member of that population. Research in differential privacy has primarily focused on one of two models. In the central model, a trusted aggregator runs a private algorithm on the raw data. In the local model, data owners run private algorithms themselves and an untrusted aggregator computes on the resulting messages. Each model has inherent limitations: solving statistical problems under local privacy demands many more samples than under central privacy, while central privacy is only possible if data owners grant an aggregator direct access to their data. In this thesis, I introduce and study shuffle privacy, an intermediate model that strives for the benefits of both local and central privacy. Protocols in this model rely on a service that permutes messages uniformly at random, which makes communication anonymous. The model abstracts the PROCHLO analytics system developed at Google [Bittau et al., SOSP '17]. I describe shuffle protocols for statistical tasks such as binary sums, histograms, and counting distinct elements. These protocols are provably more accurate than local protocols and are also robustly private: they ensure privacy even in the face of dropouts. To complement these positive results, I also prove limitations of the model. Specifically, I show that robustly private shuffle protocols cannot learn parity or solve feature selection as accurately as centrally private algorithms.
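To make the randomize-shuffle-analyze pipeline concrete, below is a minimal sketch of a shuffle protocol for binary sums, assembled from randomized response, a uniform shuffle, and a debiasing analyzer. This is an illustrative assumption for exposition, not the thesis's actual construction: the function names are hypothetical, and the privacy accounting (e.g., how shuffling amplifies the local guarantee) is omitted.

import math
import random

def local_randomizer(bit, epsilon):
    # Randomized response: report the true bit with probability
    # e^epsilon / (e^epsilon + 1), otherwise report its flip.
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p_keep else 1 - bit

def shuffler(messages):
    # The shuffle service: apply a uniformly random permutation,
    # severing the link between each message and its sender.
    messages = list(messages)
    random.shuffle(messages)
    return messages

def analyzer(messages, epsilon):
    # Debias the shuffled reports to estimate the true count S of ones.
    # E[sum of reports] = (2p - 1) * S + n * (1 - p); invert that relation.
    n = len(messages)
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return (sum(messages) - n * (1 - p)) / (2 * p - 1)

# Example: 1,000 users each hold one bit; the analyzer sees only
# the anonymized, shuffled reports, never the raw data.
data = [random.randint(0, 1) for _ in range(1000)]
reports = shuffler(local_randomizer(x, epsilon=1.0) for x in data)
print(sum(data), round(analyzer(reports, epsilon=1.0)))

In the shuffle model proper, the anonymity the shuffler provides lets each user add far less noise than plain local randomized response requires, which is the source of the accuracy gains over local protocols that the thesis establishes.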
