Abstract

Hard learning problems are important building blocks for the design of various cryptographic functionalities such as authentication protocols and post-quantum public key encryption. The standard implementations of such schemes add some controlled errors to simple (e.g., inner product) computations involving a public challenge and a secret key. Hard physical learning problems formalize the potential gains that could be obtained by leveraging inexact computing to directly generate erroneous samples. While they have good potential for improving the performance and physical security of more conventional samplers when implemented in specialized integrated circuits, it remains unknown whether the physical defaults that inevitably occur in their instantiation can lead to security losses, and whether their implementation is viable on standard platforms such as FPGAs. We contribute to these questions in the context of the Learning Parity with Physical Noise (LPPN) problem by: (1) exhibiting new (output) data dependencies of the error probabilities that LPPN samples may suffer from; (2) formally showing that LPPN instances with such dependencies are as hard as the standard LPN problem; (3) analyzing an FPGA prototype of an LPPN processor that satisfies basic security and performance requirements.

Highlights

  • For more than two decades, learning in the presence of errors has been considered an interesting source of computationally hard problems [Kea93, Hås97], which have in turn found many applications in the design of provably secure cryptographic schemes [Reg10, Pie12]

  • We show a self-reduction for the Learning Parity with Noise with Output Dependencies (LPN-OD) problem, which in particular establishes an equivalence between the LPN-OD problem and the standard LPN problem with given parameters

  • Our results suggest that inexact computing is a promising candidate for the secure and efficient implementation of LPN-based authentication protocols


Summary

Introduction

For more than two decades, learning in the presence of errors has been considered an interesting source of computationally hard problems [Kea93, Hås97], which have in turn found many applications in the design of provably secure cryptographic schemes [Reg10, Pie12]. Standard implementations of such schemes must explicitly generate the random errors, which carries an implementation cost and exposes the randomness to probing. Kamel et al. observed that by leveraging recent advances in inexact computing (a trend that is of independent interest in view of the miniaturization of electronic devices [GR10]), one could implement LPN-based schemes without explicitly generating random numbers [KSD+20]. They formalized the corresponding problem as the Learning Parity with Physical Noise (LPPN) problem, and highlighted its potential advantages in terms of implementation cost (since it removes the need to explicitly generate randomness for the additive errors) and implementation security (since it prevents the trivial attack of probing the randomness in a leaking implementation). We hope our results can be used as a seed towards both the theoretical investigation of other hard physical learning problems and the practical investigation of inexact computing in this context. We believe such a risky research path is justified by the high importance of obtaining secure and efficient post-quantum cryptographic implementations in the future.
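To make the contrast concrete, the following minimal sketch (hypothetical names and parameters, not the paper's implementation) shows a standard LPN sampler over GF(2): a uniform challenge, an inner product with the secret, and an explicitly sampled Bernoulli error bit. It is exactly this last, explicit sampling step that the LPPN approach aims to replace with errors arising naturally from inexact hardware computation.

```python
import secrets

def lpn_sample(s, tau=0.125):
    """Draw one LPN sample (a, <a, s> XOR e) for a secret bit-vector s.

    Illustrative sketch only: a software sampler generates the Bernoulli(tau)
    error e explicitly; an LPPN sampler would obtain e from physical noise.
    """
    n = len(s)
    a = [secrets.randbelow(2) for _ in range(n)]           # uniform challenge
    noiseless = sum(ai & si for ai, si in zip(a, s)) & 1   # inner product mod 2
    # Explicit Bernoulli(tau) error bit via 16-bit threshold comparison;
    # this is the randomness-generation step that LPPN removes.
    e = 1 if secrets.randbelow(1 << 16) < int(tau * (1 << 16)) else 0
    return a, noiseless ^ e
```

With `tau = 0` the sampler returns noiseless inner products, and the secret would be recoverable by Gaussian elimination; the hardness of LPN rests entirely on the presence of the error bits.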

The LPN problem
The LPPN problem
LPPN physical imperfections
Output data dependencies of the errors and mitigation
Evaluation settings
Output data dependencies results
Structured errors in parallel implementations and mitigation
Reduction between LPN and LPPN
Notations
Reduction
Concrete security estimation
Masked LPPN
FPGA prototype
FPGA design
Data dependencies and mitigations
Putting things together
Conclusions and further research
A The ASIC LPPN prototype
B Voltage and temperature validation
C Side-channel security evaluation
D Implementation figures and discussion