Abstract

Adaptive routing is crucial to the overall performance of networks-on-chip (NoCs), yet it still faces great challenges, especially as emerging applications on many-core architectures exhibit complicated and time-varying traffic patterns. Observing that most existing heuristic adaptive routing algorithms fail to handle multi-objective optimization well under complex traffic, we explore a new approach that extracts insights from network behaviors. Reinforcement learning methods have shown promise in architecture design exploration, but they have not yet been well applied to adaptive routing design. We make the first attempt to propose a novel and comprehensive reinforcement learning framework for adaptive routing on NoCs, called RELAR. RELAR is suitable for diversified traffic patterns and addresses multiple optimization objectives simultaneously: it effectively isolates endpoint congestion under adversarial hot-spot and bursty traffic, and achieves dynamic load balancing and mitigates network congestion under heavy uniform traffic. We use the state-of-the-art high-performance interconnection benchmark GPCNeT as a traffic generator to produce rich network congestion workloads, thereby improving the online-training efficiency of RELAR. We conduct extensive experiments against state-of-the-art routing algorithms to evaluate our design. The results show that RELAR reduces packet latency by 14.82% and 9.86% on average, and by up to 34.24% and 16.82%, under heavy synthetic traffic workloads and the high-performance interconnection benchmark, respectively. We also perform a cost analysis to validate a potential implementation of RELAR on NoCs with low computation, storage, and power overheads.
