Abstract

Leakage detection is a common tool to quickly assess the security of a cryptographic implementation against side-channel attacks. The Test Vector Leakage Assessment (TVLA) methodology using Welch’s t-test, proposed by Cryptography Research, is currently the most popular such tool, thanks to its simplicity and good detection speed compared to attack-based evaluations. However, like any statistical test, it is based on certain assumptions about the processed samples, and its detection performance strongly depends on parameters like the measurements’ Signal-to-Noise Ratio (SNR), their degree of dependency, and their density, i.e., the ratio between the amount of informative and non-informative points in the traces. In this paper, we argue that the correct interpretation of leakage detection results requires knowledge of these parameters, which are a priori unknown; this poses a non-trivial challenge to evaluators (especially if restricted to only one test). For this purpose, we first explore the concept of multi-tuple detection, which can exploit differences between multiple informative points of a trace more effectively than tests relying on the minimum p-value of concurrent univariate tests. To this end, we map the common Hotelling’s T²-test to the leakage detection setting and further propose a specialized instantiation of it which trades computational overheads for a dependency assumption. Our experiments show that no single test is the optimal choice for every leakage scenario. Second, we highlight the importance of the assumption that the samples at each point in time are independent, which is frequently made in leakage detection, e.g., with Welch’s t-test. Using simulated and practical experiments, we show that (i) this assumption is often violated in practice, and (ii) deviations from it can affect the detection performance, making the correct interpretation of the results more difficult.
Finally, we consolidate our findings by providing guidelines on how to use a combination of established and newly proposed leakage detection tools to infer the measurement parameters. This enables a better interpretation of the tests’ results than the current state of the art (while still relying on heuristics for the most challenging evaluation scenarios).
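The per-sample Welch’s t-test at the core of TVLA can be sketched in a few lines. The following minimal Python illustration applies the test independently to every sample point of two trace sets (fixed vs. random inputs); the 4.5 threshold is the value commonly used in TVLA practice, while the trace dimensions and the injected mean shift are invented purely for this toy example:

```python
import numpy as np
from scipy import stats

def tvla_welch(fixed, random, threshold=4.5):
    """Per-sample Welch's t-test between fixed- and random-input traces.

    fixed, random: 2-D arrays of shape (n_traces, n_samples).
    A |t| above the common 4.5 threshold flags potential leakage
    at that sample point.
    """
    t, _ = stats.ttest_ind(fixed, random, equal_var=False)
    return t, np.abs(t) > threshold

# Toy data: sample point 10 carries a small mean difference
# (simulated leakage); all other points are pure noise.
rng = np.random.default_rng(0)
n_traces, n_samples = 5000, 20
fixed = rng.normal(0.0, 1.0, (n_traces, n_samples))
random_ = rng.normal(0.0, 1.0, (n_traces, n_samples))
fixed[:, 10] += 0.2  # informative point
t_vals, flags = tvla_welch(fixed, random_)
```

Note that this univariate approach tests each point in isolation; the multi-tuple detection discussed in the paper instead exploits several informative points jointly.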

Highlights

  • Leakage detection has become a de facto standard for the fast preliminary assessment of cryptographic implementations against side-channel attacks

  • We show that (i) based on these questions, a good combination of leakage detection tests can lead to meaningful conclusions about the security order of a target implementation, the noise level of its measurements, and the density of informative samples in the traces, and (ii) these recommendations and conclusions range from formal to more heuristic, mostly depending on the implementation and randomness knowledge available

  • Test Vector Leakage Assessment (TVLA) describes an efficient detection methodology based on Welch’s t-test initially proposed by Cryptography Research [CMG+, GJJR11]


Summary

Introduction

Leakage detection has become a de facto standard for the fast preliminary assessment of cryptographic implementations against side-channel attacks. It is thought to accelerate the evaluation process by avoiding the need to conduct numerous different attacks that require expert knowledge [Wag12]. It also helps to reduce the evaluation’s data complexity[1], since it relies on a simple statistical test to decide whether an implementation leaks, instead of conducting a complete key recovery. We show that (i) based on these questions, a good combination of leakage detection tests can lead to meaningful conclusions about the security order of a target implementation, the noise level of its measurements, and the density of informative samples in the traces, and (ii) these recommendations and conclusions range from formal to more heuristic, mostly depending on the implementation and randomness knowledge available. Prior work mostly deals with the interpretation of negative detection outcomes (i.e., what can be concluded in the absence of detection?), whereas our primary focus is on positive detection results, for example in order to assess the “security order” of a masked implementation in a multi-model approach such as [JS17].
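As a rough sketch of the multi-tuple idea (the classical statistic, not the paper’s specialized instantiation), the two-sample Hotelling’s T²-test compares the whole mean vectors of two trace sets at once instead of running one t-test per point. The following minimal Python illustration assumes Gaussian noise; the trace shapes and the 0.15 mean shift are invented for the toy example:

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, Y):
    """Two-sample Hotelling's T^2 test comparing the mean vectors of
    two trace sets X, Y of shape (n_traces, p). All p sample points
    are tested jointly, unlike p concurrent univariate t-tests."""
    nx, p = X.shape
    ny = Y.shape[0]
    diff = X.mean(axis=0) - Y.mean(axis=0)
    # Pooled sample covariance of the two sets
    S = ((nx - 1) * np.cov(X, rowvar=False)
         + (ny - 1) * np.cov(Y, rowvar=False)) / (nx + ny - 2)
    t2 = (nx * ny) / (nx + ny) * diff @ np.linalg.solve(S, diff)
    # Under the null hypothesis, a rescaled T^2 follows an F-distribution
    f_stat = (nx + ny - p - 1) / (p * (nx + ny - 2)) * t2
    p_value = stats.f.sf(f_stat, p, nx + ny - p - 1)
    return t2, p_value

# Toy data: a weak 0.15 mean shift spread over all 5 sample points,
# hard to spot per point but easy to detect jointly.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (2000, 5))
Y = rng.normal(0.0, 1.0, (2000, 5)) + 0.15
t2, p_value = hotelling_t2(X, Y)
```

Estimating and inverting the p-by-p pooled covariance is what makes this joint test more expensive than univariate testing, which is the computational overhead the paper’s specialized variant trades against a dependency assumption.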

Background
Notations
Leakage Detection with Welch’s t-Test and Extensions
Multi-Tuple Detection
Hotelling’s T²-Test
D-Test for Independent Signals
Simulated Experiments
Simulation Framework
Simulation with Independent Signal
Simulation with Dependent Signal
Practical Experiments
First Case Study
Results
Second Case Study
Discussion and Conclusion
B Investigated Covariance Matrix
C Computational Complexity Evaluations
D Comparisons to alternative TVLA
