Abstract

Deep learning frameworks fall broadly into two camps: PyTorch, dominant in academia, and TensorFlow, dominant in industry. PyTorch uses dynamic graphs while TensorFlow uses static graphs, but both are, at their core, directed acyclic computational graphs in which the nodes are operations (OPs) and the edges are tensors. In TensorFlow, the computational graph must be fully constructed before data can be fed through the model, and static graphs admit more optimization opportunities and thus higher performance. Because a static graph is fixed once compilation completes, it is also easier to deploy on a server. This raises the question of how a static graph should best be compiled. We find that, during static-graph compilation, the compiler configuration (config) determines how the compiler compiles and optimizes the model, and ultimately affects the model's running time. We propose a reliable model that, trained on the compilation configurations and running times of machine learning models in a training dataset, predicts the compilation configuration that minimizes a given model's running time.
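The selection procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual model: the config encoding and the nearest-neighbour "predictor" are hypothetical stand-ins for the learned runtime model, used only to show the shape of the approach (predict a runtime for each candidate config, then pick the minimum).

```python
# Sketch of config selection via a learned runtime predictor.
# All names and the toy data below are illustrative assumptions.

def predict_runtime(train_data, config):
    """Predict the runtime of `config` as the measured runtime of the
    most similar config in the training set (1-nearest neighbour)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(train_data, key=lambda row: dist(row[0], config))
    return nearest[1]

def best_config(train_data, candidates):
    """Return the candidate config with the smallest predicted runtime."""
    return min(candidates, key=lambda c: predict_runtime(train_data, c))

# Toy training set: (config vector, measured runtime in ms).
# A config dimension might encode, e.g., optimization level or op fusion.
train = [
    ((0, 0), 120.0),
    ((1, 0), 95.0),
    ((1, 1), 70.0),
    ((2, 1), 80.0),
]

print(best_config(train, [(0, 0), (1, 1), (2, 1)]))  # -> (1, 1)
```

In practice the predictor would be a trained regression model over features of both the graph and the config, but the argmin-over-candidates structure is the same.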
