Abstract

Traditional deep learning frameworks require a new operator to be hand-optimized by experts or hardware vendors before it is usable in practice, which is inefficient. Deep learning compilers have proved to be an effective solution to this problem, but they still suffer from unbearably long overall optimization times. In this paper, targeting the XGBoost cost model in Ansor, we train a cost model based on the LightGBM algorithm, which shortens the optimization time without compromising accuracy. Experiments on real hardware show that our approach achieves a 1.8× speedup in optimization over XGBoost, while also improving the inference time of the deep networks by 6.1%.
