Abstract

Deep learning is now present in a wide range of services and applications, replacing and complementing other machine learning algorithms. Performing training and inference of deep neural networks using the cloud computing model is not viable for applications where low latency is required. Furthermore, the rapid proliferation of the Internet of Things will generate a large volume of data to be processed, which will soon overload the capacity of cloud servers. One solution is to process the data at the edge devices themselves, in order to alleviate cloud server workloads and improve latency. However, edge devices are less powerful than cloud servers, and many are subject to energy constraints. Hence, new resource and energy-oriented deep learning models are required, as well as new computing platforms. This paper reviews the main research directions for edge computing deep learning algorithms.

Highlights

  • Over the last thirty years, Deep Learning (DL) algorithms have evolved rapidly, achieving better results than previous machine learning approaches [1]

  • DL depends on the availability of high-performance computing platforms with a large amount of storage, required for the data needed to train these models [2]

  • The present review focuses on the migration of DL from the cloud to the very end devices, the final layer of edge computing


Summary

Introduction

Over the last thirty years, Deep Learning (DL) algorithms have evolved rapidly, achieving better results than previous machine learning approaches [1]. High-performance edge computing, including DL algorithms, is necessary to guarantee accuracy and real-time response in self-driving vehicles [10]. In industrial applications, accurate DL algorithms are needed to detect failures in production lines, foster automation in industrial production, and improve industrial management; these workloads must be processed locally and in real time [17]. The present review focuses on the migration of DL from the cloud to the very end devices, the final layer of edge computing. It highlights the increasing importance of the end device for an integrated DL solution, which clears the way for new user applications.

Survey Methodology
Deep Learning
Deep Neural Network Models
Deep Learning Applications
Deep Learning Inference on Edge Devices
Edge-Oriented Deep Neural Networks
Hardware-Oriented Deep Neural Network Optimizations
Computing Devices for Deep Neural Networks at the Edge
Edge Computing Architectures for Deep Learning
Deep Learning Training on Edge Devices
Discussion
Findings
Conclusions and Future Work
