Abstract

Placement for Field Programmable Gate Arrays (FPGAs) is one of the most important but time-consuming steps for achieving design closure. This article proposes the integration of three machine learning models into the state-of-the-art analytic placement tool GPlace3.0 with the aim of significantly reducing placement runtimes. The first model, MLCong, is based on linear regression and replaces the computationally expensive global router currently used in GPlace3.0 to estimate switch-level congestion. The second model, DLManage, is a convolutional encoder-decoder that uses heat maps based on the switch-level congestion estimates produced by MLCong to dynamically determine the amount of inflation to apply to each switch to resolve congestion. The third model, DLRoute, is a convolutional neural network that uses the same congestion heat maps to predict whether a placement solution is routable. Once a placement solution is predicted to be routable, further optimization can be skipped, reducing runtime. Experimental results obtained using 372 benchmarks provided by Xilinx Inc. show that when all three models are integrated into GPlace3.0, placement runtimes decrease by an average of 48%.
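
For illustration only, the sketch below shows how a DLRoute-style convolutional classifier might consume a switch-level congestion heat map and produce a probability that a placement is routable. The grid size, layer widths, and the class name RoutabilityCNN are assumptions made for this example; they are not the architecture reported in the article.

```python
import torch
import torch.nn as nn

class RoutabilityCNN(nn.Module):
    """Minimal CNN mapping a congestion heat map to a routability probability
    (an illustrative stand-in for a DLRoute-style classifier)."""

    def __init__(self, grid: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # grid -> grid/2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # grid/2 -> grid/4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (grid // 4) ** 2, 64), nn.ReLU(),
            nn.Linear(64, 1),                      # logit for "routable"
        )

    def forward(self, heatmap: torch.Tensor) -> torch.Tensor:
        # heatmap: (batch, 1, grid, grid) switch-level congestion estimates
        return torch.sigmoid(self.classifier(self.features(heatmap)))

# Example: score a batch of 64x64 congestion heat maps (random placeholder data).
model = RoutabilityCNN(grid=64)
heatmaps = torch.rand(8, 1, 64, 64)    # batch, channel, height, width
p_routable = model(heatmaps)           # shape (8, 1), values in (0, 1)
print(p_routable.squeeze(1))
```

In the flow described in the abstract, a sufficiently high predicted routability would let the placer stop congestion-driven optimization early, which is the source of the reported runtime savings.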
