Abstract

Machine learning has made it possible to mount powerful attacks through side channels that are otherwise challenging to exploit. However, due to the black-box nature of machine learning models, these attacks can be difficult to interpret correctly. Models that simply find correlations cannot be used to analyze the various sources of information leakage behind an attack. This article highlights the limitations of relying on machine learning for side-channel attacks without completing a comprehensive security analysis. We show that a state-of-the-art website-fingerprinting attack powered by machine learning was only partially analyzed. Its authors were misled into believing their attack exploited a cache-based side channel, when it actually exploited an interrupt-based side channel. We demonstrate this through a comprehensive analysis, in which we run controlled experiments to rule out alternative hypotheses about the attack's primary source of leakage, and ultimately instrument the attack's code to prove our hypothesis.
