Abstract
Helper data algorithms reliably extract secrets from physical unclonable functions (PUFs), but the necessary helper data can leak information. One state-of-the-art approach to assessing the remaining min-entropy handles homogeneous bias or correlation, but not both. Another extends this to local bias without correlation but is limited to short code lengths. This work presents a new approach for determining the min-entropy based on convolving histograms. It provides a better bound and a good approximation under arbitrary bias, more realistic correlation effects, and practically relevant code sizes. Experiments on real-world and synthetic data show the benefit of the new method compared with state-of-the-art ones. This work also facilitates a better understanding of how error correction, as a post-processing step, impacts the min-entropy.
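To illustrate the histogram-convolution idea mentioned above (this is a simplified sketch, not the paper's actual algorithm): for independent bits with per-bit biases, the distribution of the Hamming weight of the bit string can be built by repeatedly convolving the two-bin histogram `[1-p, p]` of each bit, and a min-entropy figure follows from the most likely outcome. The bias values below are hypothetical.

```python
import numpy as np

def weight_histogram(biases):
    """Distribution of the Hamming weight of independent biased bits,
    obtained by convolving the per-bit histograms [1-p, p]."""
    hist = np.array([1.0])  # zero bits: weight 0 with probability 1
    for p in biases:
        hist = np.convolve(hist, [1.0 - p, p])
    return hist

def min_entropy(hist):
    """Min-entropy in bits: -log2 of the most probable outcome."""
    return -np.log2(hist.max())

# Hypothetical per-bit probabilities of observing a '1' (local bias)
biases = [0.5, 0.6, 0.55, 0.7]
hist = weight_histogram(biases)
```

Because each convolution only doubles the histogram's bin count rather than the state space, this style of computation stays tractable at practically relevant code lengths, which is the setting the abstract targets.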