Abstract

Quantized neural networks are proposed to reduce computation and memory costs. When quantized neural networks are deployed on edge or terminal devices, they may be more vulnerable to adversarial perturbations. We focus on the extreme case, i.e., binarized neural networks (BNNs), where both weights and activations are binarized, and investigate their adversarial robustness. Six different binarized neural networks are considered, with their full-precision counterpart as the baseline. We conduct the first empirical study of adversarial robustness across these BNNs, examining their naive adversarial robustness and the effectiveness of adversarial training on BNNs, and exploring attack transferability among them. Our analysis provides a quantitative study of how BNNs perform in terms of model accuracy and adversarial robustness.

Keywords: Adversarial robustness, Binarized neural network, Adversarial training, Attack transferability
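As a minimal sketch of the binarization the abstract refers to (not the authors' specific models), BNNs typically map real-valued weights and activations to {-1, +1} with the sign function, so dot products reduce to cheap sign arithmetic. The function name `binarize` below is illustrative, not from the paper:

```python
import numpy as np

def binarize(x):
    # Map real values to {-1, +1} via the sign function,
    # the standard binarization in BNNs (0 is mapped to +1).
    return np.where(x >= 0, 1.0, -1.0)

# Example: a dot product between binarized weights and activations
# involves only values in {-1, +1}.
w = np.array([0.3, -1.2, 0.0, 0.7])
a = np.array([-0.5, 0.4, 1.1, -0.2])
print(binarize(w) @ binarize(a))  # prints -2.0
```

In hardware, such {-1, +1} products can be implemented with XNOR and popcount operations, which is the source of the computation and memory savings the abstract mentions.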

