Abstract

Driven by the recent growth in the fields of the internet of things (IoT) and deep neural networks (DNNs), DNN-powered IoT devices are expected to transform a variety of industrial applications. DNNs, however, involve many parameters and operations to process the data generated by IoT devices, which results in high data-processing latency and energy consumption. New approaches are thus being sought to tackle these issues and deploy real-time DNNs on resource-limited IoT devices. This paper presents a comprehensive review of hardware/software co-design approaches developed to implement DNNs on low-resource hardware platforms. These approaches explore the trade-offs between energy consumption, speed, classification accuracy, and model size. First, an overview of DNNs is given. Next, available tools for implementing DNNs on low-resource hardware platforms are described. Then, memory hierarchy designs together with dataflow mapping strategies are presented. Furthermore, various model optimization approaches, including pruning and quantization, are discussed. In addition, case studies are given to demonstrate the feasibility of implementing DNNs for IoT applications. Finally, detailed discussions, research gaps, and future directions are provided. The presented review can guide the design and implementation of the next generation of hardware and software solutions for real-world IoT applications.
