Abstract
Differential privacy is a promising approach to privacy-preserving data analysis: it provides strong worst-case guarantees about the harm a user could suffer from contributing their data, yet is flexible enough to allow a wide variety of data analyses to be performed with high utility. Researchers in differential privacy span many distinct research communities, including algorithms, computer security, cryptography, databases, data mining, machine learning, statistics, programming languages, social sciences, and law.
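Informally, a randomized algorithm M is ε-differentially private if, for every pair of datasets D and D′ differing in a single individual's record and every set of outputs S,

    Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S].

Smaller values of ε thus bound more tightly how much any one person's data can influence the output distribution.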
Two articles in this issue describe applications of differentially private, or nearly differentially private, algorithms to data from the U.S. Census Bureau. A third article highlights a thorny issue that applies to all implementations of differential privacy: how to choose the key privacy parameter ε.
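As a minimal illustration of the trade-off ε governs (a sketch for intuition only, not drawn from the articles themselves; the function name and example values below are illustrative), the classic Laplace mechanism releases a numeric query answer with noise of scale sensitivity/ε, so halving ε doubles the expected error:

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon):
        # Adding Laplace noise with scale = sensitivity / epsilon yields
        # epsilon-differential privacy for a query with the given
        # L1 sensitivity; smaller epsilon means larger noise.
        scale = sensitivity / epsilon
        return true_value + np.random.laplace(loc=0.0, scale=scale)

    # A counting query has sensitivity 1: adding or removing one
    # person changes the true count by at most 1.
    private_count = laplace_mechanism(true_value=1042, sensitivity=1.0, epsilon=0.5)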
This special issue also includes selected contributions from the 3rd Workshop on Theory and Practice of Differential Privacy, which was held in Dallas, TX, on October 30, 2017, as part of the ACM Conference on Computer and Communications Security (CCS).