Abstract

Recently, several research works have emphasized the problem of stealing the intellectual property of trained Machine Learning (ML) models from hardware neural network inference engines, with particular attention to Binarized Neural Networks (BNNs). The binary operations in BNNs can be executed bitwise, which notably saves storage memory, reduces the execution time and power and, therefore, makes them convenient for implementation in hardware. Unfortunately, these advantages may also introduce a vulnerability to Differential Power Analysis (DPA) side-channel attacks, which, in turn, necessitates dedicated masking techniques to protect the models. Notably, recent BNN hardware inference engines are being increasingly adopted for critical applications and demand, alongside security, high levels of in-field reliability throughout their lifetime. State-of-the-art power side-channel masking in BNNs relies on glitch-resistant structures, such as Trichina AND gates and sequences of flip-flops, and may create soft-error reliability issues that are currently overlooked in the literature. This paper presents an analysis of the soft-error reliability risks introduced by security countermeasures in hardware implementations of BNN inference engines. Our work reveals a steep increase (by hundreds of times) in vulnerability to single-event effects, introduced by the state-of-the-art security enhancement techniques, and emphasizes the interdependency of the design’s reliability and security aspects.
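The abstract mentions Trichina AND gates as the glitch-resistant masking primitive. As background, the following is a minimal functional sketch in Python of the Boolean-masked AND computed by such a gate; the actual countermeasure is a hardware structure (and the paper's concern is its soft-error behavior), so this sketch only illustrates the share arithmetic, with names chosen here for illustration.

```python
# Sketch of a Boolean-masked AND in the style of a Trichina gate.
# A secret bit a is split into shares (a1, a2) with a = a1 ^ a2; likewise b.
# Given a fresh random mask bit r, the gate outputs z with z ^ r = a & b,
# accumulated so that no intermediate value combines both unmasked inputs.

def trichina_and(a1, a2, b1, b2, r):
    """Return the masked product share z = (a & b) ^ r.

    In hardware, starting the XOR chain from the random mask r is what
    makes the structure glitch-resistant; in plain Python the evaluation
    order is only illustrative.
    """
    z = r
    z ^= a1 & b1
    z ^= a1 & b2
    z ^= a2 & b1
    z ^= a2 & b2
    return z  # output share pair is (z, r): z ^ r == a & b

# Exhaustive correctness check over all inputs, sharings, and masks.
for a in (0, 1):
    for b in (0, 1):
        for a1 in (0, 1):
            for b1 in (0, 1):
                for r in (0, 1):
                    a2, b2 = a ^ a1, b ^ b1
                    z = trichina_and(a1, a2, b1, b2, r)
                    assert z ^ r == a & b
```

Because every intermediate term involves at most one share of each input, the instantaneous power draw is decorrelated from the secret values, which is the DPA-resistance property the masking provides; the added gates and registers, however, enlarge the sequential state that single-event effects can corrupt.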
