Abstract

In nanometer-scale manufacturing, process variations have a significant impact on circuit performance. To counter them, post-silicon clock tuning buffers can be inserted into the circuit to balance the timing budgets of neighboring critical paths. The state of the art is a sampling-based approach in which an integer linear programming (ILP) problem must be solved for every sample; its runtime is therefore the number of samples multiplied by the time required to solve one ILP instance. Existing work reduces the number of samples but leaves the long overall runtime unresolved. In this paper, we propose a machine learning approach that reduces the runtime by learning the positions and sizes of post-silicon tuning buffers. Experimental results demonstrate that we can predict buffer locations and sizes with high accuracy (90% and above) and achieve a significant yield improvement (up to 18.8%) with a substantial speed-up (up to almost 20 times) compared to existing work.
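
The abstract does not specify the model or features used; the Python sketch below only illustrates the general idea of replacing per-sample ILP solves with a learned predictor. The feature definitions, label encoding, and random-forest choice are assumptions made for illustration, not the paper's actual method.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-in data (hypothetical): each row is one process-variation
    # sample; features are the sampled delays of the critical paths.
    n_samples, n_paths, n_sites = 500, 12, 6
    X = rng.normal(loc=1.0, scale=0.05, size=(n_samples, n_paths))

    # Hypothetical labels: a discretized tuning-buffer size (0 = no buffer) at
    # each candidate site, as a per-sample ILP solve would have produced.
    y = rng.integers(0, 4, size=(n_samples, n_sites))

    # Random forests support multi-output classification directly, so one
    # model predicts the buffer sizes of all candidate sites at once.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)

    # For a new variation sample, prediction is a single inference pass
    # instead of a full ILP solve.
    new_sample = rng.normal(loc=1.0, scale=0.05, size=(1, n_paths))
    print(model.predict(new_sample))  # predicted buffer size per site

Under these assumptions, the speed-up comes from amortization: the ILP is solved only for the training samples, while every further sample costs one model inference.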
