Abstract

Despite advances in deep learning technology, assuring the robustness of deep neural networks (DNNs) is challenging and necessary in safety-critical environments, including automobiles, IoT devices in smart factories, and medical devices, to name a few. Furthermore, recent developments allow us to compress DNNs, reducing their size and computational requirements so that they fit into small embedded devices. However, how robust a compressed DNN can be has not been well studied in relation to other critical factors, such as prediction performance and model size. In particular, existing studies on robust model compression have focused on robustness against off-manifold adversarial perturbations, which does not explain how a DNN will behave against perturbations that follow the same probability distribution as the training data. This aspect is relevant for on-device AI models, which are more likely to experience perturbations caused by noise from the regular data observation environment than off-manifold perturbations crafted by an external attacker. Therefore, this paper investigates the robustness of compressed deep neural networks, focusing on the relationship between model size and prediction performance under noisy perturbations. Our experiments show that on-manifold adversarial training can be effective in building robust classifiers, especially when the model compression rate is high.

Highlights

  • Deep neural networks (DNNs) have achieved remarkable success with their powerful performance in various domains, such as visual recognition, natural language processing, and time-series forecasting

  • We investigate the robustness of compressed DNN models against natural noise using on-manifold adversarial examples for worst-case analysis, in particular in the regime of highly compressed models relevant for deploying DNNs on small embedded systems

  • Our robust model compression method is composed of model compression based on sparse coding and adversarial training based on on-manifold adversarial examples

Summary

Introduction

Deep neural networks (DNNs) have achieved remarkable success with their powerful performance in various domains, such as visual recognition, natural language processing, and time-series forecasting. Despite these achievements, the sheer size and computational requirements of running DNNs can be problematic when we deploy them in real environments, such as small IoT devices.

Given training data {(x_i, y_i)}_{i=1}^n with labels y_i ∈ {1, …, K}, we solve the following optimization problem in sparse coding [46,47]:

    w* ∈ argmin_{w ∈ R^p} (1/n) Σ_{i=1}^n ℓ(f(x_i; w), y_i) + λ‖w‖_1.

The ℓ1-norm based sparse coding has a practical issue: it is difficult to determine the value of the hyperparameter λ that corresponds to a specific compression rate. One would need to solve the above optimization problem for several values of λ until finding the one that achieves the desired compression rate.
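The indirect relationship between λ and the compression rate can be seen in a small sketch: solving an ℓ1-regularized least-squares problem with ISTA (iterative soft-thresholding) for several values of λ and counting the surviving nonzero coefficients. The linear model and synthetic data below are toy assumptions for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1-norm: shrink toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, iters=500):
    """Solve min_w (1/2n)||Xw - y||^2 + lam*||w||_1 via ISTA."""
    n, p = X.shape
    # Step size 1/L, where L = sigma_max(X)^2 / n is the gradient's
    # Lipschitz constant.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:5] = 3.0                     # 5 truly active coefficients
y = X @ w_true + 0.01 * rng.standard_normal(200)

# Larger lambda -> sparser solution -> higher "compression rate",
# but the mapping is indirect: one must sweep lambda values.
nnz = {}
for lam in (0.01, 0.1, 1.0):
    w = lasso_ista(X, y, lam)
    nnz[lam] = int(np.count_nonzero(np.abs(w) > 1e-8))
    print(f"lambda={lam}: {nnz[lam]} nonzero weights")
```

The sweep shows sparsity growing with λ, but which λ yields, say, exactly 90% sparsity cannot be read off in advance; each candidate requires solving the full problem, which is the practical issue noted above.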
