Abstract

Adders are primary components in the datapath logic of a microprocessor, so adder design has always been a critical issue in the very large-scale integration (VLSI) industry. However, it is infeasible for designers to obtain an optimal adder architecture by exhaustively running the EDA flow, owing to the extremely large design space. Prior work has proposed machine learning-based frameworks to explore this design space, but these fall into suboptimality because of a two-stage learning process and feature representations of prefix adder structures that are neither efficient nor effective. In this article, we first integrate a variational graph autoencoder and a neural process (NP) into an end-to-end, multibranch framework, termed the "graph neural process." The former performs automatic feature learning of prefix adder structures, while the latter serves as an alternative to the Gaussian process. We then propose a sequential optimization framework with the graph NP as the surrogate model to explore Pareto-optimal prefix adder structures that trade off Quality-of-Result (QoR) metrics such as power, area, and delay. Experimental results show that, compared with state-of-the-art methodologies, our framework achieves a much better Pareto frontier in multiple QoR metric spaces with fewer design-flow evaluations.
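To make the notion of a Pareto frontier over QoR metrics concrete, the following is a minimal, illustrative sketch (not code from the paper): given candidate adder designs scored on (power, area, delay), where lower is better on every metric, it keeps only the non-dominated designs. The candidate values are hypothetical.

```python
def dominates(a, b):
    """Design a dominates b if it is no worse on every metric
    and strictly better on at least one (lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_frontier(designs):
    """Return the non-dominated subset of (power, area, delay) tuples."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

# Hypothetical QoR scores for four candidate prefix adder structures.
candidates = [
    (1.0, 2.0, 3.0),   # dominated by (0.9, 1.8, 2.5) on all three metrics
    (0.9, 1.8, 2.5),
    (1.2, 1.5, 2.8),   # worse power, but best area: a genuine tradeoff
    (0.8, 2.2, 3.1),   # best power, but worse area and delay
]
front = pareto_frontier(candidates)
```

The surrogate-based sequential optimization described above would repeatedly pick promising candidates, evaluate them through the design flow, and refine this frontier, rather than enumerating the full design space.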

