Abstract

While high-level synthesis (HLS) tools offer faster design of hardware accelerators with different area versus delay tradeoffs, HLS-based delay estimates often deviate significantly from the results obtained with ASIC logic synthesis (LS) tools. Current HLS tools rely on simple additive delay models that fail to capture the downstream optimizations performed during LS and technology mapping. Inaccurate delay estimates prevent fast and accurate design-space exploration without performing time-consuming LS runs. In this work, we exploit different machine learning models that automatically learn to map the downstream optimizations onto the HLS critical paths. In particular, we compare graph-based and nongraph-based learning models to investigate their efficacy and devise hybrid models to get the best of both worlds. To carry out our learning-assisted methodology, we create a dataset of different HLS benchmarks and develop an automated framework, which extends a commercial HLS toolchain, to extract essential information from the LS critical path and automatically match it to the corresponding HLS path. This matching is nontrivial to perform manually due to the difference in levels of abstraction. Finally, we train the proposed hybrid models through inductive learning and integrate them into the commercial HLS toolchain to improve delay prediction accuracy. Experimental results demonstrate significant improvements in delay estimation accuracy across a wide variety of benchmark designs. We show that graph-based models can infer essential structural features from the input design, and that incorporating these features into traditional nongraph-based models significantly improves model accuracy. Such “hybrid” models improve delay prediction accuracy by 93% compared to simple additive models and provide a $175\times$ speedup compared to LS. Furthermore, we discuss key insights from our experiments, identifying the influence of different HLS features on model performance.
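The sketch below illustrates, under stated assumptions, the kind of "hybrid" predictor the abstract describes: a small graph encoder summarizes the HLS operation graph, and its embedding is concatenated with flat per-path HLS features before a regression head predicts the post-LS delay. This is not the authors' implementation; all layer sizes, feature dimensions, and names are illustrative assumptions.

```python
# Minimal hybrid delay-prediction sketch (assumed architecture, not the paper's code).
import torch
import torch.nn as nn


class GraphEncoder(nn.Module):
    """Two rounds of mean-neighbor message passing over a dense adjacency matrix."""

    def __init__(self, node_feat_dim: int, hidden_dim: int):
        super().__init__()
        self.lin1 = nn.Linear(node_feat_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, node_feat_dim], adj: [num_nodes, num_nodes] with 0/1 entries
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.lin1(adj @ x / deg))
        h = torch.relu(self.lin2(adj @ h / deg))
        return h.mean(dim=0)  # graph-level embedding via mean pooling


class HybridDelayModel(nn.Module):
    """Concatenate the graph embedding with flat HLS path features, then regress delay."""

    def __init__(self, node_feat_dim: int, path_feat_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.encoder = GraphEncoder(node_feat_dim, hidden_dim)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim + path_feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # predicted post-LS path delay
        )

    def forward(self, x, adj, path_feats):
        g = self.encoder(x, adj)
        return self.head(torch.cat([g, path_feats], dim=-1)).squeeze(-1)


# Toy usage: 5 operation nodes with 4 node features each, plus 6 flat HLS path
# features (e.g., the additive delay estimate, path length, operation counts).
model = HybridDelayModel(node_feat_dim=4, path_feat_dim=6)
x = torch.randn(5, 4)
adj = (torch.rand(5, 5) > 0.5).float()
path_feats = torch.randn(6)
print(model(x, adj, path_feats))  # scalar delay prediction
```

In such a setup, the graph encoder plays the role of the "graph-based" model and the flat-feature regression head plays the role of the "nongraph-based" model; the model would be trained against delays measured on LS critical paths matched back to HLS paths.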
