Abstract

The leakage of personal information has become a major concern in many fields, including the social sciences, genomics, and medicine. Differential privacy was proposed to address this problem and has since been widely studied and applied. Its compositional nature has, in turn, motivated the design and implementation of differentially private machine learning mechanisms. However, as these mechanisms are deployed in practice, a serious problem emerges: practitioners find it hard to choose a meaningful ε value, or to understand the practical implications of a chosen ε. To this end, we propose a novel and efficient approach that lets users choose ε according to their utility or accuracy requirements. Specifically, with as little as one round of training and some additional calculations, we can efficiently obtain the ε value that yields a private model meeting an expected empirical loss or expected accuracy. Since product requirements often impose hard accuracy constraints, our approach allows users to focus on their own goals without having to reason about a parameter they may not understand or care about. Both theoretical analysis and experimental results demonstrate the high accuracy and broad applicability of our mechanism in practical applications.
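
The abstract does not spell out the calculation itself; as a purely illustrative sketch (not the paper's mechanism), the snippet below shows the general idea of deriving ε from an accuracy target in the simplest possible setting: a single Laplace-noised mean query, where the expected absolute error equals sensitivity/ε, so the smallest ε that meets a target expected error is the sensitivity divided by that target. All names and numbers here are hypothetical.

```python
import numpy as np

# Toy illustration only (NOT the paper's method): choosing epsilon from an
# accuracy requirement for a single Laplace-noised mean query.
# For the Laplace mechanism with sensitivity `sensitivity`, the expected
# absolute error of the released value is sensitivity / epsilon, so the
# smallest epsilon meeting a target expected error is sensitivity / target.

def epsilon_for_target_error(sensitivity: float, target_err: float) -> float:
    """Smallest epsilon whose Laplace noise meets the expected-error target."""
    return sensitivity / target_err

def private_mean(x: np.ndarray, lo: float, hi: float, epsilon: float,
                 rng: np.random.Generator) -> float:
    """Release the mean of values clipped to [lo, hi] under epsilon-DP."""
    x = np.clip(x, lo, hi)
    # Changing one record moves the clipped mean by at most (hi - lo) / n.
    sensitivity = (hi - lo) / len(x)
    noise = rng.laplace(scale=sensitivity / epsilon)
    return float(x.mean() + noise)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.uniform(0.0, 1.0, size=1_000)     # hypothetical bounded data
    sensitivity = 1.0 / len(data)                 # records bounded in [0, 1]
    target_err = 0.005                            # required expected absolute error
    eps = epsilon_for_target_error(sensitivity, target_err)
    print(f"epsilon needed: {eps:.4f}")
    print(f"private mean:   {private_mean(data, 0.0, 1.0, eps, rng):.4f}")
    print(f"true mean:      {data.mean():.4f}")
```

For iterative training mechanisms such as DP-SGD, the relationship between ε and accuracy has no such closed form, which is precisely the gap the abstract says the proposed approach addresses by combining a small amount of training with additional calculations.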
