Abstract

Physical designers typically rely on heuristics to solve challenging problems in global routing. However, heuristic solutions do not adapt to ever-changing fabrication demands, and their effectiveness is limited by the experience and creativity of the designers. Reinforcement learning (RL) is well suited to sequential optimization problems because it adapts and learns through trial and error, allowing it to build policies that handle complex tasks. This work presents an RL framework for global routing that incorporates a self-learning model called RL-Ripper. The primary function of RL-Ripper is to identify the best candidate nets to rip up and reroute in order to reduce the total number of short violations. On the ISPD 2018 benchmarks, the proposed RL-Ripper framework reduces the number of short violations compared with the state-of-the-art global router CUGR. Moreover, RL-Ripper reduces the total number of short violations after the first iteration of detailed routing over the baseline while remaining on par in wirelength, via count, and runtime. The major impact of the proposed framework is a novel learning-based approach to global routing that can be replicated for newer technologies.
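To make the rip-up-and-reroute idea concrete, below is a minimal, hypothetical sketch of such a loop: an agent repeatedly picks a net, rips it up and reroutes it, and is rewarded by the resulting drop in short violations. The `RouterEnv` environment, the bandit-style value update, and all parameter values are illustrative assumptions for exposition only, not the paper's RL-Ripper implementation.

```python
import random

class RouterEnv:
    """Toy routing environment (assumed, not from the paper): each net has a
    synthetic 'congestion' score, and nets above a threshold count as shorts."""

    def __init__(self, num_nets=50, seed=0):
        self.rng = random.Random(seed)
        self.congestion = [self.rng.random() for _ in range(num_nets)]
        self.shorts = sum(c > 0.7 for c in self.congestion)

    def step(self, net_id):
        """Rip up and reroute one net; return (reward, done)."""
        before = self.shorts
        # Rerouting tends to relieve the chosen net's congestion.
        self.congestion[net_id] *= self.rng.uniform(0.3, 1.0)
        self.shorts = sum(c > 0.7 for c in self.congestion)
        reward = before - self.shorts      # fewer shorts -> positive reward
        return reward, self.shorts == 0


def select_net(q_values, epsilon, rng):
    """Epsilon-greedy choice over per-net action values."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)


def train(episodes=200, alpha=0.1, epsilon=0.2):
    """Bandit-style simplification of the RL loop: learn which net choices
    tend to remove the most shorts across randomized routing instances."""
    rng = random.Random(1)
    q = None
    for ep in range(episodes):
        env = RouterEnv(seed=ep)
        if q is None:
            q = [0.0] * len(env.congestion)
        for _ in range(20):                       # bounded rip-and-reroute budget
            net = select_net(q, epsilon, rng)
            reward, done = env.step(net)
            q[net] += alpha * (reward - q[net])   # incremental value update
            if done:
                break
    return q


if __name__ == "__main__":
    q = train()
    best = sorted(range(len(q)), key=q.__getitem__, reverse=True)[:5]
    print("top candidate nets to rip and reroute:", best)
```

The actual framework learns a policy over real routing state rather than this contextless value table, but the reward structure shown here (short violations removed per rip-and-reroute action) mirrors the objective the abstract describes.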
