Abstract

Simulated annealing (SA) is one of the most common FPGA placement techniques, and is used both as a standalone algorithm and to improve an initial analytical placement. While SA-based placers can achieve high-quality results, they suffer from long runtimes. In this article, we introduce RLPlace, a novel SA-based FPGA placer that utilizes both reinforcement learning (RL) and targeted perturbations (directed moves). The proposed moves target both wirelength and timing optimization and explore the solution space more efficiently than traditional random moves while preventing oscillation in the Quality of Results (QoR). RL techniques are used to dynamically select the most effective move types as optimization progresses. The experimental results show that RLPlace outperforms the widely used VTR 8 placer across all runtime/quality tradeoff points, achieving better-QoR placement solutions in less runtime. On average, across the Titan23 suite of large FPGA benchmarks, RLPlace can reduce CPU time by 2.5× with result quality comparable to VTR 8, or improve wirelength by 8% (at a high CPU time budget) to 26% (at a low CPU time budget) versus VTR 8 given the same CPU time.
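RLPlace's actual agent, reward function, and directed-move set are defined in the full paper; purely as an illustration of the general idea the abstract describes, the following self-contained sketch shows an epsilon-greedy bandit choosing among move (perturbation) types inside a standard SA loop with Metropolis acceptance. All function and parameter names here are hypothetical, not taken from RLPlace.

```python
import math
import random

def anneal_with_rl_move_selection(cost_fn, perturb_fns, state,
                                  t0=1.0, alpha=0.95, iters_per_temp=50,
                                  n_temps=40, epsilon=0.1, seed=0):
    """Illustrative SA loop where an epsilon-greedy bandit picks the move type.

    perturb_fns: list of functions state -> new_state (the "move types").
    The reward for a move type is the cost reduction it achieves, so the
    agent gradually favours the move types that are currently most effective.
    """
    rng = random.Random(seed)
    cost = cost_fn(state)
    t = t0
    # Running average reward and pull count per move type.
    avg_reward = [0.0] * len(perturb_fns)
    count = [0] * len(perturb_fns)
    for _ in range(n_temps):
        for _ in range(iters_per_temp):
            # Epsilon-greedy selection over move types.
            if rng.random() < epsilon:
                k = rng.randrange(len(perturb_fns))
            else:
                k = max(range(len(perturb_fns)), key=lambda i: avg_reward[i])
            new_state = perturb_fns[k](state)
            new_cost = cost_fn(new_state)
            delta = new_cost - cost
            # Standard Metropolis acceptance criterion.
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                state, cost = new_state, new_cost
            # Update the bandit with the (clipped) improvement as reward.
            reward = max(0.0, -delta)
            count[k] += 1
            avg_reward[k] += (reward - avg_reward[k]) / count[k]
        t *= alpha  # geometric cooling schedule
    return state, cost
```

A usage example on a toy one-dimensional "placement" (minimizing x²) with two move types, a fine and a coarse random step, shows the overall shape; a real placer would operate on block positions and use wirelength/timing cost estimators instead.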
