Abstract

In this article, we present an approach to constructing reliable deep neural networks (DNNs) for safety-critical artificial intelligence applications. We propose to modify the rectified linear unit (ReLU), a commonly used activation function in DNNs, to tolerate faults incurred by bit-flip perturbations of weights. Through theoretical analysis of fault propagation in layers with ReLU activation, we observe that bounding the output of the ReLU activation helps tolerate weight faults. We then propose a novel ReLU design called boundary-aware ReLU (BReLU) to improve the reliability of DNNs, in which an upper bound on the ReLU output is determined such that the deviation between the bounded and original outputs cannot affect the final result. We propose a gradient-ascent-based algorithm to find the boundaries for the BReLU activations of all DNN layers. Because it requires no retraining of the network, our approach is cost-effective and practical for deployment in safety-critical artificial intelligence systems. Detailed experiments and real-life application benchmarking demonstrate that, under practical weight faults, our approach improves the accuracy of the VGG16 DNN from 16.7% to 82.6% on average, with only 13% memory overhead and 2.78% time overhead.
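The core idea described above, clipping each layer's ReLU output at a per-layer upper bound so that a bit-flip-corrupted weight cannot produce an arbitrarily large activation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bound value `6.0` and the example inputs are hypothetical, whereas in the paper the per-layer bounds are found by a gradient-ascent-based search.

```python
import numpy as np

def brelu(x, bound):
    """Boundary-aware ReLU sketch: standard ReLU clipped at an upper bound.

    A bit flip in a weight's exponent bits can inflate an activation by
    many orders of magnitude; clamping at `bound` stops such an error
    from propagating to later layers.
    """
    return np.minimum(np.maximum(x, 0.0), bound)

# 1e30 mimics an activation corrupted by a weight bit flip;
# the bound 6.0 is a placeholder, not a value from the paper.
x = np.array([-1.0, 2.0, 1e30])
print(brelu(x, 6.0))  # → [0. 2. 6.]
```

A standard ReLU would pass the corrupted `1e30` through unchanged; the bounded variant caps it, which is why a well-chosen bound leaves fault-free outputs unaffected while suppressing large fault-induced deviations.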
