Abstract

Nowadays, the digital chip design flow starts with formal specifications, which are mapped to Register Transfer Level (RTL) models using different underlying (micro-)architectures. In this process, the hardware designer predicts and resolves timing-critical parts in order to obtain an RTL design that meets all constraints after synthesis. However, wrong predictions are detected only later in the design flow, leading to long design iterations. Classical methods for estimating delay in early design stages are either restricted to specific component types or computationally expensive for larger designs. This paper proposes a machine learning-based approach to estimate pin-to-pin delays for RTL combinational circuits. To improve the quality of the predictions, we combine slew and delay estimation. To that end, training data are built using features of components generated by a model-driven hardware generator framework. Ground truth labels for delays, slews, and their interdependencies are extracted using open-source tools for logic synthesis and static timing analysis. Two different datasets are built: one targeting logic gates and multiplexers, and an enlarged one that generalizes to more RTL primitives. A model trained on the former dataset achieves, on average, a coefficient of determination R² of 87% when evaluated on 4-bit prefix adders. Using the enlarged dataset, the best model reaches an R² of 77%. On average, our models are 8.4× faster than running synthesis and static timing analysis. Results show that generalizing to more primitives decreases the models' performance, but the runtime benefit is maintained. Based on the delay estimation, critical areas of the design can be detected, and proper microarchitecture decisions can be taken earlier and faster in the design flow.
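The combined slew-and-delay idea can be illustrated with a minimal two-stage regression sketch: a first model predicts a component's output slew from its features, and a second model takes that predicted slew as an input feature when predicting the pin-to-pin delay, with quality measured by the coefficient of determination R². This is not the paper's actual model; the feature names (fan-out load, input slew) and the linear ground truth are invented purely for the demonstration.

```python
# Illustrative sketch only: two-stage slew -> delay estimation with
# ordinary least squares in pure Python. Features and the linear
# ground-truth relations below are invented, not taken from the paper.

def fit_linear(X, y):
    """Ordinary least squares via the normal equations (with bias term)."""
    rows = [[1.0] + list(x) for x in X]
    n = len(rows[0])
    # A = X^T X, b = X^T y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

def predict(w, x):
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Invented data: (fanout_load, input_slew) -> output slew and delay,
# where the delay depends on the output slew (the interdependency the
# combined estimation captures).
X = [(l, s) for l in (1.0, 2.0, 4.0) for s in (0.1, 0.2, 0.4)]
slews = [0.8 * l + 0.3 * s + 0.1 for l, s in X]
delays = [0.5 * l + 0.2 * sl + 0.05 for (l, s), sl in zip(X, slews)]

w_slew = fit_linear(X, slews)                       # stage 1: slew model
X_delay = [[x[0], predict(w_slew, x)] for x in X]   # stage 2 uses predicted slew
w_delay = fit_linear(X_delay, delays)

pred = [predict(w_delay, xd) for xd in X_delay]
print(round(r2(delays, pred), 3))  # prints 1.0 (exact linear data -> perfect fit)
```

Because the synthetic data are exactly linear, the fit is perfect here; on real synthesis/STA-labeled data the R² would be below 1, as in the abstract's reported 87% and 77%.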
