Abstract

Deep learning methods are gradually being adopted in Internet-of-Things (IoT) applications. Nevertheless, their large demands for computation and memory resources make them difficult to deploy in the field, as resource-constrained IoT devices would be overwhelmed by the computations incurred by the inference operations of the deployed deep learning models. In this article, we propose an adaptive computation framework, built on top of distributed deep neural networks (DDNNs), that enables the inference computations of a trained DDNN model to be executed collaboratively by the machines in a distributed computing hierarchy, e.g., an end device and a cloud server. By allowing trained models to run on actual distributed systems, the proposed framework enables the co-design of distributed deep learning models and systems: the performance delivered by a model on a system, in terms of inference time, energy consumption, and model accuracy, can be measured and fed back as the input to the next model/system design cycle. With the prototyped framework, we have built a surveillance system for an object detection application and use it as a case study to demonstrate the capabilities of the proposed framework. In addition, we share the design considerations involved in developing the DDNN system. With the promising results presented in this article, we believe the framework paves the way toward an automated design process for distributed deep learning models and systems.
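The collaborative device/cloud execution described above is commonly realized in DDNNs by an early-exit rule: the end device runs the shallow portion of the model and offloads a sample to the cloud only when the local prediction is not confident enough. The following is a minimal sketch of that decision logic; all names here (`local_infer`, `cloud_infer`, `CONFIDENCE_THRESHOLD`) are illustrative stand-ins, not the actual API of the proposed framework.

```python
# Illustrative sketch of adaptive device/cloud inference in a DDNN-style
# deployment. The "models" are stubs: local_infer treats the input as a
# probability vector and returns the argmax with its probability.

CONFIDENCE_THRESHOLD = 0.8  # tunable accuracy/latency/energy trade-off


def local_infer(probs):
    """Stand-in for the shallow on-device portion of the DDNN.

    Returns (label, confidence), where confidence is the max probability.
    """
    label = max(range(len(probs)), key=probs.__getitem__)
    return label, probs[label]


def cloud_infer(probs):
    """Stand-in for the deeper cloud portion of the model."""
    label, _ = local_infer(probs)
    return label


def adaptive_infer(probs):
    """Exit early on the device when confident; otherwise offload."""
    label, confidence = local_infer(probs)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "device"
    return cloud_infer(probs), "cloud"


print(adaptive_infer([0.05, 0.90, 0.05]))  # confident -> handled on device
print(adaptive_infer([0.40, 0.35, 0.25]))  # uncertain -> offloaded to cloud
```

In a real deployment, the threshold (and where each model partition runs) is exactly the kind of knob the measured inference time, energy, and accuracy would feed back into during the co-design cycle.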
