Abstract

An engineering change order (ECO) plays an important role in the design flow, performing leakage optimization through gate sizing and <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$V_{\rm th}$</tex-math> </inline-formula> assignment. Unfortunately, it is extremely time-consuming due to the iterative nature of cell swaps and timing checks. Many learning-based methods, especially graph neural networks (GNNs), have been applied to leakage optimization to predict <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$V_{\rm th}$</tex-math> </inline-formula> assignment. However, most of them treat a cell and its neighboring cells uniformly when aggregating cell-level topology information into design-level information, and they discard path-level information that a bidirectional long short-term memory (BiLSTM) network could exploit, suffering accuracy loss. In this work, a GNN-BiLSTM-based framework is proposed to perform commercial-quality <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$V_{\rm th}$</tex-math> </inline-formula> assignment for leakage optimization by learning both design-level and path-level information; it is validated on benchmarks from OpenCores and IWLS 2005 under TSMC 28 nm technology. The experimental results demonstrate that the proposed framework achieves the most accurate <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$V_{\rm th}$</tex-math> </inline-formula> assignment prediction among the competitive models, with F1-scores ranging from 0.954 to 0.975 for seen designs and from 0.945 to 0.965 for unseen designs, respectively.
The divergence between the leakage optimization results of this work and those of the commercial tool is limited to between 8.5% and 26.1%, a reduction of at least 2.2<inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\times$</tex-math> </inline-formula> compared with prior works. Owing to efficient training convergence and fast inference, our approach achieves a runtime improvement of up to 10<inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\times$</tex-math> </inline-formula> over the commercial tool with similar leakage optimization results.
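The accuracy gap the abstract attributes to prior GNN approaches stems from uniform neighbor aggregation, which weights every neighboring cell equally regardless of its timing criticality. The following minimal sketch (illustrative only, not the paper's actual model; the feature values and scoring vector are dummy placeholders) contrasts uniform mean aggregation with an attention-style weighted aggregation that can emphasize particular neighbors:

```python
import numpy as np

# Dummy features for 4 neighbors of one cell, 8 features each
# (e.g., drive strength, slack, load capacitance in a real flow).
rng = np.random.default_rng(0)
num_neighbors, feat_dim = 4, 8
neighbor_feats = rng.normal(size=(num_neighbors, feat_dim))

# Uniform aggregation: every neighbor contributes exactly 1/N,
# so a timing-critical neighbor counts no more than an idle one.
uniform_agg = neighbor_feats.mean(axis=0)

# Weighted aggregation: softmax scores from a (hypothetical) learned
# scoring vector let some neighbors dominate the aggregate.
score_vec = rng.normal(size=feat_dim)
scores = neighbor_feats @ score_vec          # one scalar score per neighbor
weights = np.exp(scores) / np.exp(scores).sum()  # softmax, sums to 1
weighted_agg = weights @ neighbor_feats      # convex combination of rows

print(uniform_agg.shape, weighted_agg.shape)  # both (8,)
```

Uniform aggregation is the special case where all softmax weights equal 1/N; a learned weighting strictly generalizes it, which is one way to frame the accuracy gain claimed above.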
