Abstract

We address the following problem: how to execute any algorithm $P$, for an unbounded number of executions, in the presence of an adversary who observes partial information on the internal state of the computation during those executions. The security guarantee is that the adversary learns nothing beyond $P$'s input-output behavior. Our main result is a compiler which takes as input an algorithm $P$ and a security parameter $\kappa$ and produces a functionally equivalent algorithm $P'$. The running time of $P'$ is a factor of ${\rm poly}(\kappa)$ slower than that of $P$. $P'$ is composed of a series of calls to ${\rm poly}(\kappa)$-time computable subalgorithms. During the executions of $P'$, an adversary algorithm ${\cal A}$, which can choose the inputs of $P'$, learns the results of adaptively chosen leakage functions, each with output size bounded by $\tilde{\Theta}(\kappa)$, applied to the subalgorithms of $P'$ and the randomness they use. We prove that any computationally unbounded ${\cal A}$ observing the results of computationally unbounded leakage functions learns no more from its observations than it could given black-box access to the input-output behavior of $P$ alone. Unlike all prior work on this question, our result does not rely on any secure hardware components and is unconditional: it holds even if $\mathsf{P} = \mathsf{NP}$.

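To make the model concrete, the following is a minimal Python sketch of the interfaces the abstract describes: a compiler mapping $(P, \kappa)$ to a sequence of subalgorithms, and the leakage experiment in which the adversary adaptively picks a bounded-output leakage function to apply to each subalgorithm's state and randomness. All names here (SubAlgorithm, compile_algorithm, leakage_experiment, choose_leakage, observe) are hypothetical and not taken from the paper; the compiler body itself is the paper's construction and is deliberately left unspecified.

```python
# Hypothetical sketch of the interfaces implied by the abstract.
# All identifiers are illustrative; none come from the paper.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SubAlgorithm:
    """One poly(kappa)-time step of the compiled algorithm P'."""
    run: Callable[[bytes], bytes]   # maps the current intermediate state to the next one
    randomness: bytes               # fresh randomness this step consumes


def compile_algorithm(P: Callable[[bytes], bytes], kappa: int) -> List[SubAlgorithm]:
    """Stand-in for the compiler: (P, kappa) -> a functionally equivalent P'
    expressed as a sequence of subalgorithms.  The actual construction is the
    paper's contribution and is not reproduced here."""
    raise NotImplementedError


def leakage_experiment(subalgorithms: List[SubAlgorithm], adversary, x: bytes, kappa: int) -> bytes:
    """Run P'(x) step by step.  After each subalgorithm, the adversary applies
    one adaptively chosen leakage function, with output restricted to roughly
    kappa bits, to that subalgorithm's state and randomness."""
    leakage_bound_bits = kappa      # stands in for the \tilde{Theta}(kappa) bound
    state = x
    for sub in subalgorithms:
        state = sub.run(state)
        leak_fn = adversary.choose_leakage()            # adaptive: may depend on earlier leakage
        leaked = leak_fn(state, sub.randomness)
        assert 8 * len(leaked) <= leakage_bound_bits    # bounded-output restriction
        adversary.observe(leaked)
    return state                    # equals P(x) by functional equivalence of P'
```

The security claim of the paper, in these terms, is that whatever such an adversary learns across unboundedly many runs of leakage_experiment can be simulated given only black-box access to $P$.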