Abstract

Over the past few years, the computational capability of field-programmable gate arrays (FPGAs) has increased tremendously. This has led to an increase in both the complexity of the designs implemented on FPGAs and the time taken by the FPGA back-end flow. The FPGA back-end flow comprises many steps, and routing is among the most critical of them. Routing normally accounts for more than 50% of the total time taken by the back-end flow, so an optimization at this step can improve the flow as a whole. In this work, we propose enhancements to the routing step by incorporating a reinforcement learning (RL)-based framework. In the proposed RL-based framework, we use the ϵ-greedy approach and customized reward functions to speed up the routing step while maintaining quality of results (QoR) similar to or better than that of the conventional negotiation-based congestion-driven routing solution. For experimentation, we use two sets of widely deployed, large heterogeneous benchmarks. Our results show that, for the RL-based framework, the ϵ-greedy approach combined with a modified reward function gives better results than purely greedy or purely exploratory approaches. Moreover, incorporating the proposed reward function in the RL-based framework and comparing it with a conventional routing algorithm shows that the proposed enhancement requires less routing time while giving similar or better QoR. On average, a speedup of 35% is recorded for the proposed routing enhancement compared to negotiation-based congestion-driven routing solutions. Finally, the speedup of the routing step leads to a 25% overall reduction in the execution time of the back-end flow.
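To make the ϵ-greedy idea concrete, the sketch below illustrates how a router might balance exploration and exploitation when choosing its next expansion node. This is a minimal illustration under stated assumptions, not the paper's implementation: the names (candidate_nodes, q_values, epsilon, alpha) and the incremental value update are hypothetical, and the actual framework would supply its own state encoding and customized, congestion-aware reward.

```python
import random

# Hypothetical sketch of epsilon-greedy node selection for routing.
# q_values maps a routing-resource node to its estimated value; both the
# mapping and the parameter defaults are assumptions for illustration.

def epsilon_greedy_select(candidate_nodes, q_values, epsilon=0.1):
    """Pick the next node: explore with probability epsilon,
    otherwise exploit the node with the highest estimated value."""
    if random.random() < epsilon:
        return random.choice(candidate_nodes)  # explore a random node
    # exploit: highest-value candidate (unseen nodes default to 0.0)
    return max(candidate_nodes, key=lambda n: q_values.get(n, 0.0))

def update_q(q_values, node, reward, alpha=0.5):
    """Incremental value update; the paper's customized reward
    (e.g., congestion- and timing-aware) would be passed as `reward`."""
    old = q_values.get(node, 0.0)
    q_values[node] = old + alpha * (reward - old)

# Example usage with toy values:
q = {"n1": 0.4, "n2": 0.7}
chosen = epsilon_greedy_select(["n1", "n2", "n3"], q)
update_q(q, chosen, reward=1.0)
```

In this reading, setting epsilon to 0 recovers a purely greedy router and setting it to 1 a purely exploratory one; the abstract's finding is that an intermediate ϵ-greedy policy with a modified reward outperforms both extremes.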
