Abstract

Tuning the many settings of a commercial placer is critical to achieving power–performance–area goals, and a human engineer typically spends considerable time on it. This article proposes a deep reinforcement learning (RL) framework to optimize the placement parameters of a commercial electronic design automation (EDA) tool. We build an autonomous agent that learns to tune parameters without human intervention or domain knowledge, trained solely by RL from self-search. To generalize to unseen netlists, we use a mixture of handcrafted features from graph topology theory and graph embeddings generated by unsupervised graph neural networks. Our RL algorithms are chosen to overcome the sparsity of data and the latency of placement runs. As a result, our trained RL agent achieves up to 11% and 2.5% wire length improvements on unseen netlists compared with a human engineer and a state-of-the-art tool auto-tuner, in just one placement iteration ($20\times$ and $50\times$ fewer iterations, respectively). In addition, the success of the RL agent is measured using a statistical test with theoretical guarantees and an optimized sample size.
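To make the problem framing concrete, below is a minimal sketch (not the authors' code) of placer-parameter tuning cast as a one-step search loop where each expensive placement run yields one reward. The parameter names, the `run_placement` stand-in, and the epsilon-greedy strategy are all illustrative assumptions; the paper's agent is a deep RL policy conditioned on netlist features, which this toy baseline does not model.

```python
"""Hypothetical sketch: treat each placement run as one episode whose
reward is the (negative) wire length, and search a small knob grid.
All parameter names and the placer model below are invented stand-ins."""
import itertools
import random

# Assumed discrete search space for a commercial placer's knobs.
PARAM_GRID = {
    "effort":        ["low", "medium", "high"],
    "density":       [0.6, 0.7, 0.8],
    "timing_driven": [False, True],
}

def run_placement(params):
    """Stand-in for one expensive placer run; returns a wire length.
    A real agent would invoke the EDA tool here and parse its report."""
    score = {"low": 1.1, "medium": 1.0, "high": 0.95}[params["effort"]]
    score *= 1.0 + abs(params["density"] - 0.7)
    score *= 0.97 if params["timing_driven"] else 1.0
    return score * random.uniform(0.98, 1.02)  # placer run-to-run noise

def tune(budget=20, eps=0.2):
    """Epsilon-greedy tuning under a tight run budget, reflecting the
    data sparsity and placement latency the abstract mentions."""
    configs = [dict(zip(PARAM_GRID, vals))
               for vals in itertools.product(*PARAM_GRID.values())]
    best_wl = {}   # best observed wire length per configuration
    best = None
    for _ in range(budget):
        if best is None or random.random() < eps:
            cand = random.choice(configs)   # explore a random setting
        else:
            cand = best                     # exploit the incumbent
        wl = run_placement(cand)
        key = tuple(sorted(cand.items()))
        best_wl[key] = min(best_wl.get(key, wl), wl)
        best = dict(min(best_wl, key=best_wl.get))
    return best

if __name__ == "__main__":
    print("best params:", tune())
```

The one-episode-per-run framing is the key design point: because every sample costs a full placement, the agent must recommend good parameters after very few runs, which is why the paper emphasizes generalizing from netlist features rather than re-searching per design.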
