Abstract

Adaptive routing plays a pivotal role in the overall performance of Networks-on-Chip (NoCs). However, many-core architectures must support complex and constantly changing traffic patterns for emerging applications, which makes adaptive routing design significantly more challenging. Our examination and analysis of existing heuristic adaptive routing algorithms revealed three key limiting factors: reliance on a single network-status metric, lack of system-feedback awareness, and lack of customizability. Reinforcement Learning (RL) methods offer promising opportunities for exploring adaptive routing design. Deep reinforcement learning (DRL) techniques, in particular, enable efficient exploration of adaptive routing design spaces where heuristic strategies may be inadequate. This paper proposes DRLAR, a novel deep reinforcement learning framework for adaptive routing that is suitable for diversified traffic patterns and addresses multiple optimization objectives simultaneously. DRLAR formulates routing as an agent that makes routing decisions through autonomous learning from multiple network-state features and system-level feedback information. We conduct extensive experiments against state-of-the-art routing algorithms to evaluate our design. The results show that DRLAR reduces packet latency by an average of 33.3%, achieving reductions of 41.6% and 10.5% on average under heavy synthetic traffic and the PARSEC 2.1 benchmarks, respectively. We also perform a cost analysis to validate the feasibility of implementing DRLAR on NoCs with low computational, storage, and power overhead.
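To make the abstract's formulation concrete, the sketch below shows one common way to cast routing as an RL agent: a per-router policy observes a vector of network-state features (e.g., buffer occupancies and downstream credits) and picks an output port via Q-learning. This is a minimal illustrative sketch under assumed names and a simple linear Q-function; it is not the paper's actual DRLAR architecture, reward design, or feature set.

```python
import numpy as np

class DRLRouterSketch:
    """Hypothetical sketch of an RL-based adaptive router.

    The agent maps a network-state feature vector to Q-values over
    output ports and learns with one-step Q-learning. All names and
    hyperparameters here are illustrative assumptions, not DRLAR's.
    """

    def __init__(self, n_features, n_ports, lr=0.01, gamma=0.9, eps=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        # Linear Q-value approximator: one weight row per output port.
        self.W = self.rng.normal(scale=0.1, size=(n_ports, n_features))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def q_values(self, state):
        # Q(s, a) for every candidate output port a.
        return self.W @ state

    def choose_port(self, state, valid_ports):
        # Epsilon-greedy selection restricted to ports the (deadlock-free)
        # routing function permits for this packet.
        if self.rng.random() < self.eps:
            return int(self.rng.choice(valid_ports))
        q = self.q_values(state)
        return int(max(valid_ports, key=lambda p: q[p]))

    def update(self, state, port, reward, next_state, done):
        # One-step TD (Q-learning) update of the chosen port's weights.
        # The reward would encode latency/congestion feedback in practice.
        target = reward if done else reward + self.gamma * np.max(self.q_values(next_state))
        td_err = target - self.q_values(state)[port]
        self.W[port] += self.lr * td_err * state
```

In a full design, the linear approximator would be replaced by a deep network and the feature vector would combine several congestion metrics with system-level feedback, which is what distinguishes this class of approach from single-metric heuristics.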
