Abstract
Modern computer architectures rely on caches to reduce the latency gap between the CPU and main memory. While indispensable for performance, caches pose a serious threat to security because they leak information about memory access patterns of programs via execution time. In this paper, we present a novel approach for reasoning about the security of cache algorithms with respect to timing leaks. The basis of our approach is the notion of leak competitiveness, which compares the leakage of two cache algorithms on every possible program. Based on this notion, we prove the following two results: First, we show that leak competitiveness is symmetric in the cache algorithms. This implies that no cache algorithm dominates another in terms of leakage via a program's total execution time. This is in contrast to performance, where it is known that such dominance relationships exist. Second, when restricted to caches with finite control, the leak-competitiveness relationship between two cache algorithms is either asymptotically linear or constant. No other shapes are possible.
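To make the comparison underlying leak competitiveness concrete, the following minimal Python sketch (not from the paper; all names, the toy program, and the cache capacity are illustrative assumptions) measures a simple leakage proxy for two common replacement policies, LRU and FIFO: the number of distinct total execution times, approximated here by miss counts, that an adversary could observe as a secret input varies.

```python
from collections import OrderedDict, deque

def simulate(policy, capacity, accesses):
    """Return the number of cache misses for a memory-access sequence
    under a given replacement policy ('lru' or 'fifo')."""
    misses = 0
    if policy == "lru":
        cache = OrderedDict()
        for block in accesses:
            if block in cache:
                cache.move_to_end(block)       # refresh recency on a hit
            else:
                misses += 1
                if len(cache) == capacity:
                    cache.popitem(last=False)  # evict the least recently used block
                cache[block] = None
    elif policy == "fifo":
        cache = deque()
        for block in accesses:
            if block not in cache:
                misses += 1
                if len(cache) == capacity:
                    cache.popleft()            # evict the oldest inserted block
                cache.append(block)
    return misses

def leakage(policy, capacity, program, secrets):
    """Leakage proxy: number of distinct total execution times (miss counts)
    observable as the secret input varies."""
    times = {simulate(policy, capacity, program(s)) for s in secrets}
    return len(times)

# Toy program whose access sequence depends on a secret s in {0,...,3}:
# it touches blocks 0..3 and then re-touches block s.
program = lambda s: [0, 1, 2, 3, s]
secrets = range(4)

for policy in ("lru", "fifo"):
    print(policy, leakage(policy, capacity=2, program=program, secrets=secrets))
```

Leak competitiveness, as described above, quantifies such per-program leakage for one cache algorithm relative to another over every possible program, rather than over a single hand-picked example like this one.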