Abstract
In this paper, nearly 40 commonly used deep neural network (DNN) models are selected and analysed in depth across computing platforms and inference frameworks. Their performance is measured with metrics including accuracy, total number of model parameters, computational complexity, accuracy density, inference time, and memory consumption. Heterogeneous computing experiments are conducted on both the Google Colab cloud computing platform and the Jetson Nano embedded edge computing platform, and the results are compared with those of two earlier computing platforms: a workstation equipped with an NVIDIA Titan X Pascal GPU and an embedded system based on an NVIDIA Jetson TX1 board. In addition, different inference frameworks are evaluated on the Jetson Nano platform to assess the inference efficiency of the DNN models. Regression models are established to characterize how the computing performance of different DNN classification algorithms varies, so that the inference results of unseen models can be estimated, and ANOVA methods are proposed to quantify the differences between models. The experimental results provide practical guidance for the selection, deployment, and application of DNN models. Code is available at https://github.com/Foreverzfy/Model-Test.
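Two of the quantities named above can be illustrated with a minimal sketch: accuracy density (here assumed to mean accuracy divided by parameter count, a common definition) and the F statistic of a one-way ANOVA for comparing per-model measurement groups. The function names and all numeric samples below are hypothetical and for illustration only; they are not the paper's actual measurements or implementation.

```python
from statistics import mean

def accuracy_density(top1_acc, params_millions):
    """Accuracy per million parameters -- one way to compare
    how efficiently a model uses its parameters (assumed definition)."""
    return top1_acc / params_millions

def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA over several sample groups,
    e.g. repeated inference-time measurements for several DNN models.
    A large F suggests the group means differ more than chance alone
    would explain."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k = len(groups)               # number of groups (models)
    n = len(all_vals)             # total number of measurements
    # variation of group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # variation of measurements within each group
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Hypothetical example: inference-time samples (ms) for two models.
model_a = [31.2, 30.8, 31.5, 30.9]
model_b = [18.4, 18.1, 18.6, 18.3]
print(one_way_anova_f(model_a, model_b))
```

In practice the F statistic would be compared against an F distribution with (k-1, n-k) degrees of freedom (e.g. via `scipy.stats.f_oneway`) to obtain a p-value; the pure-stdlib version above only computes the statistic itself.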