Abstract
The Learning with Errors (LWE) problem is one of the most important computational problems in modern lattice-based cryptography. It can be viewed as a Bounded Distance Decoding (BDD) problem, which can in turn be reduced to the unique Shortest Vector Problem (uSVP). The standard way to reduce BDD to uSVP is via Kannan’s embedding. At ICALP 2016, Bai, Stehlé, and Wen presented an improved theoretical reduction from BDD to uSVP that uses sparsification techniques. So far, the implications of this improved reduction, and of sparsification in general, for the hardness of LWE have not been studied. In this work, we consider a sparsified embedding attack on LWE derived from the Bai–Stehlé–Wen reduction. In particular, we analyze its performance under the so-called 2016 estimate, introduced at USENIX 2016 by Alkim, Ducas, Pöppelmann, and Schwabe and analyzed at ASIACRYPT 2017 by Albrecht, Göpfert, Virdia, and Wunderer. Our results suggest that, in general, the sparsified embedding attack does not yield a better attack on LWE in practice than Kannan’s embedding. However, for certain parameter sets and scenarios with a reasonable number of computing clusters, the use of sparsification may be beneficial.
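For context, the embedding step referenced in the abstract can be sketched as follows; the symbols $\mathbf{A}, \mathbf{s}, \mathbf{e}, \mathbf{b}, \mathbf{B}, \mathbf{x}, M$ below are notational assumptions for illustration and do not appear in the paper's abstract. Given an LWE instance $(\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \bmod q)$, the vector $\mathbf{b}$ is a BDD target lying close to the $q$-ary lattice spanned by a basis $\mathbf{B}$. The display below shows only the standard (non-sparsified) form of Kannan's embedding with embedding factor $M$; the sparsified variant analyzed in the paper is not reproduced here.

$$
\mathbf{B}' \;=\; \begin{pmatrix} \mathbf{B} & \mathbf{b} \\ \mathbf{0}^{\top} & M \end{pmatrix},
\qquad
\mathbf{B}' \begin{pmatrix} -\mathbf{x} \\ 1 \end{pmatrix}
\;=\; \begin{pmatrix} \mathbf{b} - \mathbf{B}\mathbf{x} \\ M \end{pmatrix}
\;=\; \begin{pmatrix} \mathbf{e} \\ M \end{pmatrix},
$$

where $\mathbf{b} = \mathbf{B}\mathbf{x} + \mathbf{e}$ for some integer vector $\mathbf{x}$ and short error $\mathbf{e}$. Because $(\mathbf{e}, M)$ is unusually short in the lattice generated by $\mathbf{B}'$, recovering it as a uSVP solution recovers $\mathbf{e}$ and hence the LWE secret.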