Abstract

We propose a new mechanism to implement differential privacy. Unlike the usual mechanism based on adding noise whose magnitude is proportional to the sensitivity of the query function, our proposal is based on refining the user's prior knowledge about the response. Our mechanism is shown to have several advantages over noise addition: it does not require complex computations, and thus it can be easily automated; it lets the user exploit her prior knowledge about the response to achieve better data quality; and it is independent of the sensitivity of the query function (although this can be a disadvantage if the sensitivity is small). Furthermore, we give a general algorithm for knowledge refinement and we show some compounding properties of our mechanism for the case of multiple queries; also, we build an interactive mechanism on top of knowledge refinement and we show that it is safe against adaptive attacks. Finally, we give a quality assessment for the responses to individual queries.
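For context, the baseline the abstract contrasts against is sensitivity-calibrated noise addition, most commonly the Laplace mechanism: the true answer is perturbed with Laplace noise of scale sensitivity/epsilon. The sketch below illustrates that baseline only (it is not the paper's knowledge-refinement mechanism); the function names, the toy dataset, and the counting query are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale, rng=random):
    # Inverse-CDF sampling of the Laplace distribution with the given scale.
    # (Illustrative helper, not from the paper.)
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=random):
    # Standard epsilon-differentially-private noise addition:
    # the noise magnitude is proportional to sensitivity / epsilon,
    # which is the dependence the proposed mechanism avoids.
    return true_answer + laplace_noise(sensitivity / epsilon, rng=rng)

# Example: a counting query over a toy dataset.
# Counting queries have sensitivity 1 (one record changes the count by at most 1).
data = [21, 35, 44, 52, 67]
true_count = sum(1 for x in data if x >= 40)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Because the noise scale grows as sensitivity/epsilon, queries with large sensitivity or small epsilon receive heavy perturbation, which is the data-quality cost the knowledge-refinement approach aims to sidestep.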
