Abstract

Computing a function of private inputs while maintaining the confidentiality of those inputs is an important problem, to which Differential Privacy and Secure Multi-party Computation offer solutions under specific assumptions. Research in randomised algorithms aims to improve the privacy of such inputs by randomising the output of a computation while ensuring that large distortions of the output occur only with low probability. However, use cases such as e-voting or auctions will not tolerate large distortions at all. We therefore develop a framework for randomising the output of a privacy-preserving computation while guaranteeing that output distortions stay within a specified bound. We analyse the privacy gains of our approach and characterise them more precisely for our notion of sparse functions. For this class of functions, we build randomisation algorithms that run in linearithmic time in the number of possible input values, and we prove that the computed randomisations maximise the inputs' privacy. Experimental work demonstrates significant privacy gains over existing approaches that guarantee distortion bounds, including for non-sparse functions.
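To make the bounded-distortion idea concrete, below is a minimal illustrative sketch, not the paper's optimal algorithm: it randomises an integer-valued output only among candidate outputs within a fixed distortion bound, weighting closer candidates more heavily in the spirit of an exponential-mechanism-style distribution truncated to that bound. All names and parameters (`bounded_randomise`, `bound`, `epsilon`) are assumptions made for illustration.

```python
import math
import random

def bounded_randomise(true_output: int, outputs: list, bound: int, epsilon: float) -> int:
    """Return a randomised output whose distortion from `true_output`
    never exceeds `bound`.

    Candidates outside the bound get zero probability (hard distortion cap);
    candidates inside the bound are weighted exponentially by their distance
    to the true output, so closer values are more likely. This is only a
    sketch of the bounded-distortion concept, not the paper's construction.
    """
    # Keep only candidate outputs within the allowed distortion.
    candidates = [o for o in outputs if abs(o - true_output) <= bound]
    # Exponentially down-weight candidates by their distance to the truth.
    weights = [math.exp(-epsilon * abs(o - true_output)) for o in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Example: the true tally is 42, outputs range over 0..100, and distortions
# larger than 3 are never allowed.
print(bounded_randomise(42, list(range(101)), bound=3, epsilon=0.5))
```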
