Abstract

The challenge in cache-based attacks on cryptographic algorithms is not merely to capture the cache footprints during their execution but to process the captured information to deduce the secret key. Our principal contribution is a theoretical framework on which we base AES key retrieval algorithms that are not only more efficient in execution time but also require up to 75% fewer blocks of ciphertext than previous work. Aggressive hardware prefetching greatly complicates access-driven attacks, since they cannot distinguish a cache line fetched on demand from one that was prefetched but never used during a run of a victim executing AES. We implement multi-threaded spy code that reports accesses to the AES tables at the granularity of a cache line. Since prefetching greatly increases side-channel noise, we develop heuristics to “clean up” the input received from the spy threads. Our key retrieval algorithms process the sanitized input to recover the AES key using only about 25 blocks of ciphertext in the presence of prefetching and, stunningly, a mere 2–3 blocks with prefetching disabled. We also derive analytical models that capture the effect of varying false positive and false negative rates on the number of blocks of ciphertext required for key retrieval.

