Abstract

Physical Unclonable Functions (PUFs) exploit the manufacturing process variations inherent in silicon chips to generate unique secret keys. Although PUFs are intended to be unclonable and unbreakable, researchers have shown that they are vulnerable to machine learning (ML) attacks. In this article, we analyze the vulnerability of different FPGA-based Ring Oscillator PUFs (ROPUFs) to machine learning attacks. The challenge-response pair (CRP) data obtained from the different ROPUFs are used to train several machine learning algorithms. The study shows that Artificial Neural Network (ANN) models can model the ROPUFs with a training accuracy of 99.9% and a prediction accuracy of 62% when 5,000 CRPs are used for a challenge-response ROPUF. We assume a realistic scenario in which only a small portion of the CRP dataset (approximately 15% at most) is unscrupulously obtained by a hacker. A prediction accuracy of 62% leaves the PUF vulnerable to machine learning attacks. Therefore, a secondary goal of this article is the design of a ROPUF capable of thwarting machine learning modeling attacks. The modified XOR-inverter ROPUF drastically reduces the prediction accuracy from 62% to 13.1%, making it considerably more difficult for hackers to model the ROPUF.
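To illustrate the kind of modeling attack described above, the following is a minimal sketch of training an ANN on CRP data, not the authors' implementation. It assumes binary challenge vectors with a 1-bit response and uses synthetic placeholder data; the array shapes, the 64-bit challenge width, and the network size are illustrative assumptions only.

```python
# Minimal sketch of an ML modeling attack on ROPUF CRP data.
# Assumptions (not from the article): challenges are binary vectors,
# responses are single bits, and the attacker holds only a small
# stolen subset of CRPs (here, 5,000), as in the threat model above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder for the stolen CRPs (synthetic data for illustration).
n_crps, challenge_bits = 5000, 64
X = rng.integers(0, 2, size=(n_crps, challenge_bits))  # challenge bits
y = rng.integers(0, 2, size=n_crps)                    # 1-bit responses

# Fit an ANN on part of the attacker's CRPs and measure how well it
# predicts responses to challenges it has not seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
ann.fit(X_train, y_train)

print("training accuracy:", ann.score(X_train, y_train))
print("prediction accuracy:", ann.score(X_test, y_test))
```

With real CRP data in place of the synthetic arrays, a prediction accuracy well above 50% on held-out challenges indicates that the PUF can be modeled, which is the vulnerability the modified XOR-inverter ROPUF is designed to suppress.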
