Abstract

We explore a new design space for tree-based oblivious RAM (ORAM) constructions, one that has not received much attention from the research community. Concretely, our approach dynamically reorders the sequence of input requests into batches so that the portion of the tree paths shared by the requests in each batch is maximized. In this way, the amount of data fetched per ORAM access can be significantly reduced, saving I/O bandwidth. Our results show an average performance gain between 5% and 35% over the baseline ORAM, even on real workloads with causal dependencies, which confirms the practical utility of dynamic batching.
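To make the batching idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of greedy path-overlap batching for a tree ORAM with heap-indexed nodes. The names `path_nodes` and `batch_requests`, the greedy selection rule, and the fixed batch size are all illustrative assumptions.

```python
def path_nodes(leaf: int, height: int) -> set[int]:
    # Nodes on the root-to-leaf path in a heap-indexed binary tree:
    # the node at depth d is (1 << d) + (leaf >> (height - d)).
    return {(1 << d) + (leaf >> (height - d)) for d in range(height + 1)}

def batch_requests(leaves, height, batch_size):
    # Greedily group requests whose paths share many tree nodes,
    # so each batch fetches fewer distinct blocks (illustrative only).
    pending = list(leaves)
    batches = []
    while pending:
        batch = [pending.pop(0)]
        union = path_nodes(batch[0], height)
        while pending and len(batch) < batch_size:
            # Pick the pending request whose path overlaps the batch most.
            best = max(range(len(pending)),
                       key=lambda i: len(union & path_nodes(pending[i], height)))
            leaf = pending.pop(best)
            batch.append(leaf)
            union |= path_nodes(leaf, height)
        batches.append(batch)
    return batches
```

For example, with a height-3 tree, requests for leaves 0 and 1 share three of their four path nodes, so a greedy pass pairs them together rather than with a distant leaf such as 7. Any real construction would additionally have to preserve obliviousness when reordering, which this sketch ignores.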
