Abstract

A powerful cache timing attack can not only determine the secret key of a cryptographic cipher accurately but also do so quickly. Cache timing attacks that exploit the shared L1 cache are known to have both characteristics. In contrast, attacks using the shared last-level cache (LLC) are not always successful in recovering the secret key, and they take considerably longer than an L1 cache attack. This paper leverages the fact that all LLC attacks run on multi-core CPUs, which allows the attack programs to be parallelized. We show how parallelization can be used to reduce the runtime and improve the attack's success rate, making it on par with L1 cache attacks. We then propose a new methodology for LLC attacks by which an attacker can maximize the attack's success within a given time frame. The only additional requirement is learning the target system's runtime behavior, which is done offline. We validate all our claims on a 4-core and a 10-core CPU.
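To make the parallelization idea concrete, the sketch below shows one way a probe phase of an LLC timing attack could be split across worker threads on a multi-core CPU, with each thread timing accesses to a disjoint slice of monitored cache sets. This is an illustrative sketch only, not the paper's implementation: the set count, stride, buffer layout, and thread partitioning are hypothetical, and eviction-set construction as well as the key-recovery analysis are omitted.

```cpp
// Illustrative sketch: parallel probe phase of a cache timing measurement.
// All sizes and the buffer layout are placeholders, not the paper's parameters.
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>
#include <x86intrin.h>   // __rdtscp

// Time one memory access in cycles using rdtscp.
static inline uint64_t probe_latency(volatile uint8_t *addr) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                      // the probed access
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main() {
    const size_t kSets      = 1024;       // hypothetical number of monitored LLC sets
    const size_t kStride    = 64 * 1024;  // hypothetical stride between probed lines
    const unsigned kThreads = std::thread::hardware_concurrency();

    std::vector<uint8_t> buffer(kSets * kStride, 1);  // probe buffer (placeholder layout)
    std::vector<uint64_t> latency(kSets, 0);

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < kThreads; ++t) {
        workers.emplace_back([&, t] {
            // Each worker probes a disjoint slice of the monitored sets in parallel.
            for (size_t s = t; s < kSets; s += kThreads)
                latency[s] = probe_latency(&buffer[s * kStride]);
        });
    }
    for (auto &w : workers) w.join();

    // A real attack would compare these latencies against a hit/miss threshold
    // to infer victim activity; here we only print a few samples.
    for (size_t s = 0; s < 8; ++s)
        std::printf("set %zu: %llu cycles\n", s, (unsigned long long)latency[s]);
    return 0;
}
```

Splitting the monitored sets across threads shortens each probe round roughly in proportion to the number of cores used, which is the effect the paper exploits to bring LLC attack runtimes closer to those of L1 cache attacks.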
