Abstract

A conventional pseudo-random number generator (PRNG) is vulnerable to machine learning (ML) attacks because a deterministic algorithm generates its output. A physical unclonable function (PUF) is a hardware security primitive that can also be cracked by ML attacks. The key security difference between a regular PRNG and a PUF, however, is that training on the output data of a regular PRNG is sufficient to break it, whereas the challenge-response pairs of a PUF must be available for a successful attack. To design an ML-resistant PRNG, in this Letter the output data of a regular PRNG is first fed into a PUF as the challenge to generate encrypted data. The encrypted data is then added to the output of a second regular PRNG to produce the output of the ML-resistant PRNG. Because the input challenge of the PUF is concealed, an adversary cannot model the PUF with ML techniques. The results show that the training accuracy for a single output bit of the ML-resistant PRNG is only about 52.6% even when 200,000 samples are used for training, whereas 50,000 samples are adequate to break a regular PRNG under ML attacks.
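
The construction described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the Letter's implementation: a toy arbiter-PUF model with random delay weights stands in for the hardware PUF, two random.Random instances stand in for the regular PRNGs, and "added" is taken to mean XOR.

    import random

    N_STAGES = 64  # assumed challenge width of the toy PUF model

    # Device-specific "delay" weights: fixed once, hidden from any attacker.
    _puf_weights = [random.gauss(0.0, 1.0) for _ in range(N_STAGES + 1)]

    def toy_puf_response(challenge_bits):
        """Return one response bit for a challenge (list of 0/1 bits)."""
        # Standard arbiter-PUF parity feature vector.
        phi = []
        for i in range(N_STAGES):
            prod = 1
            for c in challenge_bits[i:]:
                prod *= (1 - 2 * c)   # map 0/1 -> +1/-1, running product
            phi.append(prod)
        phi.append(1)                  # bias term
        delay = sum(w * p for w, p in zip(_puf_weights, phi))
        return 1 if delay > 0 else 0

    def ml_resistant_bit(prng_a, prng_b):
        """One output bit: PRNG-A drives the concealed PUF challenge,
        and the PUF response is XOR-ed with one bit from PRNG-B."""
        challenge = [prng_a.getrandbits(1) for _ in range(N_STAGES)]  # never exposed
        return toy_puf_response(challenge) ^ prng_b.getrandbits(1)

    if __name__ == "__main__":
        prng_a = random.Random(2024)   # regular PRNG feeding the PUF
        prng_b = random.Random(4097)   # second regular PRNG
        stream = [ml_resistant_bit(prng_a, prng_b) for _ in range(32)]
        print("output bits:", stream)

Because only the final XOR-ed stream is observable, an attacker never sees the challenge-response pairs needed to model the PUF, which is the property the Letter relies on for ML resistance.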
