Abstract

Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in a wide range of real-world applications such as healthcare and online banking. However, recent advances in deep learning have opened up a largely unexplored surface for adversarial attacks, jeopardizing the integrity of DNN systems. DNNs are vulnerable to adversarial samples that are generated by perturbing correctly classified inputs to cause DNN models to mispredict. This can potentially lead to catastrophic consequences, especially in critical and security-sensitive applications such as autonomous driving and video surveillance systems. We propose ensemble of approximate multipliers (EAM), an architectural technique that uses a combination of approximate multipliers to protect DNNs against adversarial attacks. Approximate computing is known for its effectiveness in improving the energy efficiency of computing platforms at the cost of a slight accuracy loss. Using approximation in the form of inexact multipliers is also effective for increasing robustness, since it injects input-dependent noise into the outputs of DNNs. However, the resiliency of DNNs against malicious attacks depends on the type of approximate multiplier: depending on the level of approximation, the robustness of DNNs varies by a large margin. We exploit this variability and propose using an ensemble of different types of approximate multipliers. EAM increases robustness across a wide range of attack scenarios. In particular, we show that successful attacks against an exact DNN have poor transferability to EAM. The transferability is also poor in a black-box setting where perturbed inputs are generated by a substitute model. In addition, EAM is resilient in a white-box attack scenario where the attacker has full knowledge of the approximate hardware. Our ensemble technique requires no duplication of computing cores or memory units; EAM only uses additional approximate compressors within the original multipliers. We conduct extensive experiments on a set of strong adversarial attacks and empirically show that EAM increases robustness over the exact model with negligible impact on the accuracy of benign inputs.
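As a rough software illustration of the core idea (not the paper's actual hardware design, which approximates inside the multiplier using extra approximate compressors), the sketch below models an approximate multiplier by truncating operand bits and an EAM-style ensemble that routes each multiplication to a randomly chosen approximation level; the function names, truncation levels, and selection policy are all hypothetical.

```python
import random

def approx_mul(a: int, b: int, trunc_bits: int) -> int:
    """Toy approximate multiplier: drop the lowest `trunc_bits` bits of
    each operand before multiplying. The dropped bits introduce an
    input-dependent error, standing in for the error of hardware
    approximate compressors."""
    mask = ~((1 << trunc_bits) - 1)
    return (a & mask) * (b & mask)

class EAMMultiplier:
    """Ensemble of approximate multipliers: each multiply is served by
    one of several approximation levels, so the injected noise an
    attacker must anticipate varies across the ensemble."""

    def __init__(self, trunc_levels=(0, 2, 4)):
        self.trunc_levels = trunc_levels

    def __call__(self, a: int, b: int) -> int:
        level = random.choice(self.trunc_levels)
        return approx_mul(a, b, level)

eam = EAMMultiplier()
print(eam(173, 94))  # varies with the chosen approximation level
print(173 * 94)      # exact reference: 16262
```

The intuition this sketch captures is that an adversarial perturbation tuned to one (exact or approximate) arithmetic behavior loses effectiveness when the arithmetic error profile differs across ensemble members.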
