Abstract

The drive for automation and constant monitoring has led to rapid development in the field of Machine Learning (ML). The high accuracy offered by state-of-the-art ML algorithms such as Deep Neural Networks (DNNs) has paved the way for their use even in emerging safety-critical applications, e.g., autonomous driving and smart healthcare. However, these applications require assurance about the functionality of the underlying systems/algorithms. Therefore, the robustness of these ML algorithms to different reliability and security threats has to be thoroughly studied, and mechanisms/methodologies have to be designed that increase the inherent resilience of these algorithms. Since traditional reliability measures like spatial and temporal redundancy are costly, they may not be feasible for DNN-based ML systems, which are already highly compute- and memory-intensive. Hence, new robustness methods for ML systems are required. Towards this, in this chapter, we present our analyses illustrating the impact of different reliability and security vulnerabilities on the accuracy of DNNs. We also discuss techniques that can be employed to design ML algorithms such that they are inherently resilient to reliability and security threats. Towards the end, the chapter provides open research challenges and further research opportunities.

Highlights

  • Machine learning (ML) has emerged as the principal tool for performing complex tasks which are impractical to code by humans

  • Aging is the gradual degradation of hardware due to different physical phenomena like Hot Carrier Injection (HCI), Negative-Bias Temperature Instability (NBTI), and Electromigration (EM)

  • Neurons are the fundamental computational units in a neural network, where each neuron computes a weighted sum of its inputs using the weight associated with each input connection
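
As a minimal sketch of this weighted-sum computation (the function name, values, and choice of activation below are illustrative, not taken from the chapter):

```python
import numpy as np

def neuron(inputs, weights, bias, activation=np.tanh):
    # Weighted sum of inputs plus bias, passed through an activation function
    return activation(np.dot(weights, inputs) + bias)

x = np.array([0.5, -1.0, 2.0])   # inputs to the neuron
w = np.array([0.1, 0.4, -0.2])   # one weight per input connection
b = 0.05                         # bias term
y = neuron(x, w, b)              # the neuron's scalar output
```

A DNN layer applies this computation for many neurons at once, which is why the matrix-multiply hardware discussed later in the chapter dominates both compute and memory cost.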


Summary

Introduction

Machine learning (ML) has emerged as the principal tool for performing complex tasks which are impractical (if not impossible) to code by humans. ML techniques provide machines the capability to learn from experience and thereby learn to perform complex tasks without much (if any) human intervention. Deep Learning (DL), using Deep Neural Networks (DNNs), has shown state-of-the-art accuracy, even surpassing human-level accuracy in some cases, for many applications [31]. These applications include, but are not limited to, object detection and localization, speech recognition, language translation, and video processing [31]. The state-of-the-art performance of the DL-based methods has led to the use of DNNs in complex safety-critical applications, for example, autonomous driving [11] and smart healthcare [10].

Artusi, University of Cyprus, Nicosia, Cyprus
Data Manipulation
Deep Neural Networks
Hardware Accelerators for Deep Neural Networks
Reliable Deep Learning
Our Methodology for Designing Reliable DNN Systems
Resilience of DNNs to Reliability Threats
Resilience of DNNs to Permanent Faults
Resilience of DNNs to Timing Faults
Resilience of DNNs to Memory Faults
Permanent Fault Mitigation
Timing Fault Mitigation
TE-Drop
Per-Layer Voltage Underscaling
Security Attacks on DNNs
Adversarial Perturbation Attacks
Gradient Sign Methods
Optimization-based Approaches
Backdoor Attacks
Defences Against Security Attacks on DNNs
Generative Adversarial Networks
Case Study
Findings
Open Research Challenges

