Abstract

The Learning Parity with Noise (LPN) problem is the average-case analogue of the NP-complete problem of decoding linear codes, and it has been extensively studied in learning theory, coding theory, and cryptography, with applications to quantum-resistant cryptographic schemes. However, LPN also suffers from the large public-key sizes that commonly hinder code-based cryptography from being practical. In this paper, we study a sparse variant of LPN whose public matrix consists of sparse vectors rather than uniformly random ones. We give a win–win argument showing that at least one of the following statements is true: (i) the hardness of sparse LPN is implied by that of standard LPN under the same noise rate; or (ii) there exist new black-box constructions of public-key encryption schemes and oblivious transfer protocols from standard LPN. Since the second statement would require noise regimes currently considered infeasible for LPN-based public-key cryptography, we believe the first is more likely to hold, i.e., sparse LPN is as hard as standard LPN. Finally, we give a (heuristic) method to further compress the sparse public matrix by evaluating pseudorandom functions whose keys are made public; its security again rests on the aforementioned win–win technique.
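To make the distinction between the two problems concrete, the following is a minimal sketch of how standard and sparse LPN samples differ. It is our own illustration, not the paper's construction; the secret length, the row weight k, and the noise rate 1/8 are placeholder parameters.

```python
import secrets

def bernoulli(num: int, den: int) -> int:
    """Sample a noise bit e with Pr[e = 1] = num/den."""
    return 1 if secrets.randbelow(den) < num else 0

def lpn_sample(s: list[int], mu=(1, 8)) -> tuple[list[int], int]:
    """One standard LPN sample (a, <a, s> + e mod 2) with uniform a."""
    n = len(s)
    a = [secrets.randbelow(2) for _ in range(n)]
    e = bernoulli(*mu)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
    return a, b

def sparse_lpn_sample(s: list[int], k: int, mu=(1, 8)) -> tuple[list[int], int]:
    """One sparse LPN sample: a has exactly k nonzero coordinates,
    so the public matrix built from such rows is sparse."""
    n = len(s)
    support = set()
    while len(support) < k:
        support.add(secrets.randbelow(n))
    a = [1 if i in support else 0 for i in range(n)]
    e = bernoulli(*mu)
    b = (sum(s[i] for i in support) + e) % 2
    return a, b

# Example: a length-16 secret, weight-3 rows, noise rate 1/8.
s = [secrets.randbelow(2) for _ in range(16)]
print(lpn_sample(s))
print(sparse_lpn_sample(s, k=3))
```

A sparse row can be stored as its k support indices instead of n bits, which is the source of the public-key compression the abstract refers to.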
