Abstract

In practice, any integrated physical unclonable function (PUF) must be accessed through a logical interface. The interface may add functionality such as access control, implement a (measurement) noise-reduction layer, etc. In many PUF applications, the interface in fact hides the PUF itself: users only interact with the PUF's interface and cannot see or verify what lies behind it. This immediately raises a security problem: how does the user know that he is interacting with a properly behaving interface wrapped around a proper PUF? This question is not merely theoretical, but has strong relevance for PUF application security: it has recently been shown that a badly behaving interface could, e.g., log a history of PUF queries which an adversary can read out through some trapdoor, or may output false PUF responses that the adversary can predict or influence [RvD-IEEESP13]. This enables attacks on a considerable number of PUF protocols [RvD-IEEESP13]. Since we currently do not know how to authenticate proper interface behavior in practice, the security of many PUF applications implicitly rests on the mere assumption that an adversary cannot modify or enhance a PUF interface in a malicious way. This is quite a strong hypothesis, which should be stated more explicitly in the literature. In this paper, we explicitly address this point, following and partly expanding earlier work [RvD-IEEESP13]. We add to the picture the need for rigorous security characterized by a security parameter λ (an adversary has probability negl(λ) of successfully software-cloning/modeling a PUF). First, this means that we need so-called Strong PUFs with a challenge space larger than poly(λ). In order to obtain scalable PUF designs (whose chip surface or volume does not blow up with increasing λ), we need PUF designs that consist of an algebraic composition of smaller basic building blocks/devices. In such compositions, security relies on a less well-established computational hardness assumption, which states that machine learning and other modeling methods with poly(λ) runtime cannot reliably produce a software clone of the PUF. To provide rigorous security, we argue that the PUF interface needs a one-way postprocessing of PUF responses, such that security can be reduced to the infeasibility of breaking the one-way property of the postprocessing. This leads to a set of interesting problems: how do we add noise reduction into this picture, and how do we minimize or eliminate side-channel leakage of intermediate values computed in the postprocessing?
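To make the two ingredients above concrete, here is a minimal Python sketch, not taken from the paper: a toy XOR Arbiter PUF (an algebraic composition of k basic arbiter chains under the standard additive delay model) whose interface releases only a SHA-256 hash of the raw response bits, so that recovering challenge-response behavior requires inverting the one-way postprocessing. All function names and parameters are hypothetical illustrations; a real design would additionally insert a noise-reduction step (e.g., a fuzzy extractor) before hashing, since raw PUF responses are noisy.

```python
# Illustrative sketch (not from the paper): a toy XOR Arbiter PUF built
# from k basic arbiter chains, with one-way postprocessing (SHA-256)
# applied inside the interface so raw response bits never leave it.
# All names and parameter choices here are hypothetical.

import hashlib
import numpy as np

rng = np.random.default_rng(0)

def make_arbiter_puf(n_stages):
    """One basic building block: delay differences of an arbiter chain."""
    return rng.normal(size=n_stages + 1)

def arbiter_response(weights, challenge):
    """Standard additive delay model: sign of weights dot feature vector,
    where phi_i = prod_{j>=i} (1 - 2*c_j), plus a constant term."""
    phi = np.append(np.cumprod(1 - 2 * challenge[::-1])[::-1], 1.0)
    return int(weights @ phi > 0)

def xor_puf_response(chains, challenge):
    """Algebraic composition: XOR the single-bit responses of k chains."""
    r = 0
    for w in chains:
        r ^= arbiter_response(w, challenge)
    return r

def interface_response(chains, challenges):
    """Interface with one-way postprocessing: evaluate many challenges,
    then release only a hash of the raw bits, never the bits themselves."""
    bits = bytes(xor_puf_response(chains, c) for c in challenges)
    return hashlib.sha256(bits).hexdigest()

# Usage: k = 4 chains, 64 challenge bits, hash over 128 response bits.
chains = [make_arbiter_puf(64) for _ in range(4)]
challenges = [rng.integers(0, 2, size=64) for _ in range(128)]
print(interface_response(chains, challenges))
```

In this sketch, the machine-learning hardness assumption corresponds to the claim that no poly(λ)-time attacker can model xor_puf_response; the hash then denies the attacker the labeled challenge-response pairs that known modeling attacks require, reducing security to the one-wayness of the postprocessing.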
