Abstract

Deep neural networks are widely used in artificial intelligence applications and have demonstrated state-of-the-art accuracy on many tasks. Achieving this accuracy requires the right parameter values, which are obtained through a process known as training. Training on large amounts of data over many iterations is costly in both computation time and energy, so optimal resource allocation can reduce training time. TensorFlow, a computational graph library developed by Google, eases the development of neural network models and provides the means to train them. In this article, we propose Adaptive HTF-MPR to carry out the resource allocation, or mapping, on TensorFlow. Adaptive HTF-MPR searches for the best mapping using a hybrid approach. We applied the proposed methodology to two well-known image classifiers, VGG-16 and AlexNet, and performed a full analysis of the solution space of MNIST Softmax. Our results demonstrate that Adaptive HTF-MPR outperforms the default homogeneous TensorFlow mapping. In addition to the speedup, Adaptive HTF-MPR can react to changes in the state of the system and adjust to an improved mapping.
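To make the notion of a device mapping concrete, the sketch below uses TensorFlow's standard tf.device API to pin individual operations to specific devices. This is only a minimal illustration of per-operation placement under the assumption of a machine with one CPU and one GPU; it is not the Adaptive HTF-MPR search itself.

```python
import tensorflow as tf

# By default, TensorFlow places operations homogeneously on the single
# "best" device it detects. A heterogeneous mapping can be expressed by
# pinning operations to devices explicitly with tf.device.

# Fall back gracefully if a requested device is not present.
tf.config.set_soft_device_placement(True)

with tf.device("/CPU:0"):
    # Input preparation kept on the CPU.
    x = tf.random.uniform((1024, 1024))

with tf.device("/GPU:0"):
    # Compute-heavy matrix multiplication mapped to the GPU
    # (assumes a GPU is available; soft placement falls back to CPU otherwise).
    y = tf.matmul(x, x)

print(y.device)  # reports where the operation actually ran
```

A placement search such as the one described above would, in effect, explore which assignment of operations to available devices minimizes training time.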
