Abstract

We propose a scheme to prevent machine learning (ML) attacks against physically unclonable functions (PUFs). A silicon PUF is a security primitive in a semiconductor chip that generates a unique identifier by exploiting device variations. However, some PUF implementations are vulnerable to ML attacks, in which an attacker tries to obtain a mathematical clone of the target PUF to predict its responses. Our scheme adds intentional noise to the responses to disturb the attacker's ML training, so that the clone fails to be authenticated, while the original PUF can still be authenticated correctly using an error correction code (ECC). The effectiveness of this scheme is not obvious, because the attacker can also use the ECC. We apply the countermeasure to n-XOR arbiter PUFs to investigate the feasibility of the proposed scheme, and we examine the relationship between the clone's prediction accuracy and the number of intentional noise bits. Our scheme successfully distinguishes a clone from the legitimate PUF in the case of a 5-XOR arbiter PUF.
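To make the authentication-side intuition concrete, the following is a minimal sketch, not the paper's actual protocol or parameters: the verifier stores an enrolled response, the legitimate device returns that response with a few intentionally flipped bits, and an ML clone returns a prediction with some per-bit error rate. The ECC is abstracted as a Hamming-distance threshold (`ECC_CAPACITY`); `RESPONSE_BITS`, `NOISE_BITS`, and `CLONE_ERROR_RATE` are illustrative assumptions, and natural device noise is omitted for brevity.

```python
import random

# --- Illustrative parameters (assumptions, not values from the paper) ---
RESPONSE_BITS = 127       # length of one response word
NOISE_BITS = 8            # intentional noise bits injected by the device
ECC_CAPACITY = 15         # number of bit errors the ECC can correct
CLONE_ERROR_RATE = 0.25   # assumed per-bit prediction error of an ML clone

def flip_random_bits(bits, count, rng):
    """Flip `count` randomly chosen positions in a copy of `bits`."""
    out = list(bits)
    for i in rng.sample(range(len(out)), count):
        out[i] ^= 1
    return out

def hamming(a, b):
    """Number of positions in which two bit lists differ."""
    return sum(x != y for x, y in zip(a, b))

def authenticate(enrolled, received, ecc_capacity):
    """Accept iff the ECC could correct the received word back to the enrolled one."""
    return hamming(enrolled, received) <= ecc_capacity

rng = random.Random(0)
enrolled = [rng.randint(0, 1) for _ in range(RESPONSE_BITS)]

# Legitimate PUF: enrolled response plus intentional noise, still within ECC capacity.
legit = flip_random_bits(enrolled, NOISE_BITS, rng)
print("legitimate PUF accepted:", authenticate(enrolled, legit, ECC_CAPACITY))

# ML clone: each bit mispredicted with CLONE_ERROR_RATE, plus the same intentional noise.
clone = [b ^ (rng.random() < CLONE_ERROR_RATE) for b in enrolled]
clone = flip_random_bits(clone, NOISE_BITS, rng)
print("clone accepted:", authenticate(enrolled, clone, ECC_CAPACITY))
```

In this sketch the clone is abstracted as a fixed per-bit error rate; in the paper's setting the clone would instead be obtained by training an ML model on the (intentionally noisy) challenge-response pairs of the n-XOR arbiter PUF, which is what the intentional noise is meant to degrade.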
