Abstract

With the remarkable success of machine learning, innovative neural network designs have proliferated, and their applications are increasingly pervasive in daily life, even in life-critical domains such as autonomous driving and medical diagnosis. In these domains, whether an AI-based system is secure is a critical issue. In this work, we first present six Hardware Trojan attacks and demonstrate their impacts on the hardware design of neural networks. For the data-leakage attacks, we encode the leaked data into the output, making the leakage more difficult to detect. Most of our attacks either achieve an attack success rate above 98% or leak confidential data without causing any functional violation, at less than 1.5% hardware overhead. We also discuss how to detect these Hardware Trojans effectively and efficiently with formal verification methods, and we further propose a risk assessment process that provides prioritized guidance for security verification tasks on neural network hardware. Based on our results, we strongly suggest that security specifications and complete verification are essential for neural network designs.
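The abstract states only that leaked data is encoded into the output, without specifying the scheme. As a purely illustrative sketch, the Python model below assumes one plausible channel: overwriting the least-significant bits of quantized output activations with secret bits, which perturbs the functional output by at most one quantization step while letting an attacker who observes the outputs recover the data. The function names (embed_secret, extract_secret) are hypothetical and not taken from the paper.

```python
# Behavioral sketch of an LSB-based data-leakage Trojan in a neural
# network accelerator's output stage. Illustrative assumption only;
# the paper's actual encoding is not detailed in the abstract.
import numpy as np

def embed_secret(outputs: np.ndarray, secret: bytes) -> np.ndarray:
    """Overwrite the LSB of each 8-bit output activation with one secret bit."""
    bits = np.unpackbits(np.frombuffer(secret, dtype=np.uint8))
    leaked = outputs.copy()
    flat = leaked.reshape(-1)          # view into the copy
    n = min(bits.size, flat.size)
    flat[:n] = (flat[:n] & 0xFE) | bits[:n]  # keep upper 7 bits, replace LSB
    return leaked

def extract_secret(outputs: np.ndarray, num_bytes: int) -> bytes:
    """Attacker-side decoder: collect output LSBs and repack them into bytes."""
    bits = outputs.reshape(-1)[: num_bytes * 8] & 1
    return np.packbits(bits).tobytes()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for quantized (uint8) activations leaving an accelerator.
    clean = rng.integers(0, 256, size=(4, 16), dtype=np.uint8)
    secret = b"key!"

    trojaned = embed_secret(clean, secret)
    # Perturbation is at most 1 LSB per activation (<0.4% of the range),
    # so accuracy-based functional testing is unlikely to flag the Trojan.
    print("max perturbation:",
          int(np.abs(trojaned.astype(int) - clean.astype(int)).max()))
    print("recovered:", extract_secret(trojaned, len(secret)))
```

This also illustrates why the abstract argues for formal, property-based verification: the Trojaned design is functionally indistinguishable from the clean one under ordinary accuracy testing, so only an explicit security specification on the output encoding would expose it.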
