Abstract

With the rapid growth in the number of devices connected to the Internet, there is a trend toward moving the intelligent processing of the generated data with deep neural networks (DNNs) from cloud servers to the network edge. Performing DNN inference and training on edge hardware is motivated by latency constraints, security and privacy concerns, and restricted network bandwidth. However, implementing DNNs on resource-constrained edge devices is challenging. This article surveys recent advances in the efficient processing of DNNs, highlighting current research trends and future challenges. Specifically, we begin by reviewing optimization methods for the hardware-aware deployment of DNNs. We then present case studies of promising new directions toward low-complexity on-chip training. Finally, we discuss future challenges and potential solutions for the efficient deployment of DNNs at the edge.
