Abstract

Deep neural networks (DNNs) have been widely used in many cyber–physical systems (CPSs). However, deploying DNNs in real-time systems remains challenging. In particular, the execution time of DNN inference must be predictable, so that one can determine whether inference will complete within a required timing constraint. Moreover, in many embedded applications, such as autonomous cars, the timing constraints may change dynamically with the runtime environment. A possible way to meet such dynamic real-time requirements is to execute different subnetworks of a DNN at runtime. However, improperly constructed subnetworks may not only exhibit unpredictable inference times, so that real-time constraints can be violated unexpectedly, but may also be poorly compatible with well-optimized machine learning frameworks (e.g., TensorFlow). In this article, we study the predictability of executing different subnetworks of a DNN. In particular, we present a featurewise runtime adaptation framework for DNN inference, which is implemented and validated on NVIDIA Jetson TX2 and Nano with TensorFlow. The experimental results show that our method achieves predictable inference times in comparison with state-of-the-art methods.
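To make the subnetwork idea concrete, below is a minimal TensorFlow sketch of one plausible reading of featurewise adaptation: a convolutional layer that can execute with only the first k of its feature maps, so that a width parameter chosen at runtime selects a narrower subnetwork with a smaller, more predictable workload. The layer name SlicedConv2D, the width parameter, and the prefix-truncation policy are illustrative assumptions for this sketch, not the paper's actual design.

import tensorflow as tf

class SlicedConv2D(tf.keras.layers.Layer):
    """Convolution that can run with only its first k output feature maps.

    Executing a prefix of the filters yields a narrower subnetwork;
    width=1.0 recovers the full layer.
    """
    def __init__(self, max_filters, kernel_size):
        super().__init__()
        self.max_filters = max_filters
        self.kernel_size = kernel_size

    def build(self, input_shape):
        in_ch = int(input_shape[-1])
        self.kernel = self.add_weight(
            name="kernel",
            shape=(self.kernel_size, self.kernel_size, in_ch, self.max_filters),
            initializer="glorot_uniform")
        self.bias = self.add_weight(
            name="bias", shape=(self.max_filters,), initializer="zeros")

    def call(self, x, width=1.0):
        k = max(1, int(width * self.max_filters))  # feature maps to keep
        w = self.kernel[..., :k]                   # slice the kernel before
        y = tf.nn.conv2d(x, w, strides=1,          # convolving, so the narrow
                         padding="SAME")           # path actually does less work
        return tf.nn.relu(y + self.bias[:k])

# Example: pick a width at runtime according to the current timing budget.
layer = SlicedConv2D(max_filters=64, kernel_size=3)
x = tf.random.normal([1, 224, 224, 3])
fast = layer(x, width=0.25)  # 16 feature maps: lower latency
full = layer(x, width=1.0)   # all 64 feature maps: full accuracy

Because slicing is applied to the kernel rather than the output, each width corresponds to a fixed, smaller convolution, which is what makes per-width execution times measurable and predictable offline.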
