Abstract

Oblivious RAM (ORAM) remains a bittersweet protection of memory access patterns because of its prohibitively high overhead. The root cause is that ORAM hides each intended access among a sufficiently large number of dummy accesses. Most existing optimizations mitigate memory access overhead through architectural enhancements (e.g., caching), yet few of them improve the efficiency of ORAM primitives per se. In this paper, we identify path-grained static scheduling as a fundamental ORAM performance bottleneck. We propose level-grained dynamic scheduling, which directly optimizes ORAM primitives to boost efficiency. It enables ORAM to service more than one request per path and to write paths back in batches. We can thus boost ORAM efficiency by handling queued requests as soon as possible and removing as many redundant accesses as possible. Since the optimized memory accesses still target the same set of paths, dynamic scheduling preserves ORAM security. We implement dynamic scheduling in Hitchhiker ORAM. Compared with the state-of-the-art primitive-optimized Fork Path ORAM, Hitchhiker ORAM yields 31.5% fewer memory accesses, 60.2% shorter latency, and 40.7% lower energy consumption, running 2.5× faster. Compared with the state-of-the-art architecture-optimized <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$\rho$</tex-math></inline-formula>, Hitchhiker ORAM is 1.5× faster, and the integrated version, <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$\rho$</tex-math></inline-formula>-Hitchhiker ORAM, is 2.0× faster.
