Abstract

Traditional hardware security primitives such as physical unclonable functions (PUFs) are vulnerable to machine learning (ML) attacks. The primary reason is that PUFs rely on process mismatches between two identically designed circuit blocks to generate deterministic mathematical functions as their secret information sources, and ML algorithms are highly efficient at modeling deterministic mathematical functions. To resist ML attacks, this letter proposes a novel hardware security primitive named the neural network (NN) chain, which utilizes noise data to generate chaotic NNs for authentication. In an NN chain, two independent batches of noise data are used as the input and output training data of the NNs, respectively, to maximize the uncertainty within the chain. Compared with a regular PUF, the proposed NN chain achieves over 20 times higher ML attack resistance and 100% reliability with less than 39% power and area overhead.
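The following is a minimal sketch of the training idea described above: each stage of a chain is a small neural network fitted so that one batch of noise maps to a second, independent batch of noise. The layer sizes, chain length of three, plain gradient-descent training, and the `train_stage`/`forward` helpers are illustrative assumptions, not details taken from the letter.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_stage(x_noise, y_noise, hidden=16, lr=0.05, epochs=500):
    """Fit a tiny one-hidden-layer network so that x_noise maps to y_noise (assumed setup)."""
    n_in, n_out = x_noise.shape[1], y_noise.shape[1]
    w1 = rng.normal(scale=0.5, size=(n_in, hidden))
    w2 = rng.normal(scale=0.5, size=(hidden, n_out))
    for _ in range(epochs):
        h = np.tanh(x_noise @ w1)            # hidden activations
        y_hat = h @ w2                        # linear output layer
        err = y_hat - y_noise                 # mean-squared-error gradient
        w2 -= lr * h.T @ err / len(x_noise)
        w1 -= lr * x_noise.T @ ((err @ w2.T) * (1 - h**2)) / len(x_noise)
    return w1, w2

def forward(x, stage):
    w1, w2 = stage
    return np.tanh(x @ w1) @ w2

# Build a chain of three stages; each stage's training target is a fresh,
# independent batch of noise, so the learned mapping is not a deterministic
# function of the challenge alone (hypothetical sizes below).
n_samples, width = 64, 8
challenge = rng.normal(size=(n_samples, width))
chain, x = [], challenge
for _ in range(3):
    target = rng.normal(size=(n_samples, width))  # independent noise batch
    chain.append(train_stage(x, target))
    x = target

# Authentication idea (sketch): replay the challenge through the stored chain
# and compare the response against the enrolled one.
response = challenge
for stage in chain:
    response = forward(response, stage)
print(response.shape)  # (64, 8)
```

Because each stage's output targets are drawn independently of its inputs, an attacker observing challenge-response pairs cannot recover a compact deterministic function in the way ML modeling attacks exploit a conventional PUF.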
