Abstract

Data-center networks (DCNs) exhibit several new characteristics: the coexistence of elephant flows, mice flows, and coflows, and the coexistence of multiple network resources (bandwidth, cache, and computing). Cache should factor into routing decisions because it can eliminate redundant traffic in a DCN. However, conventional routing schemes cannot learn from previous experience with network abnormalities (such as congestion), and their metrics still reflect a single link state (such as hop count, distance, or cost) that ignores the effect of cache. Consequently, they cannot allocate these resources efficiently enough to meet the performance requirements of the various flow types. This paper therefore proposes deep reinforcement learning-based routing (DRL-R). First, we propose a method that recombines multiple network resources with different metrics: cache and bandwidth are recombined by quantifying their contribution scores toward reducing delay. Second, we propose a routing scheme that uses this resource-recombined state. A DRL agent deployed on a software-defined networking (SDN) controller continually interacts with the network and adapts its routing to the network state, optimally allocating network resources to traffic. We employ a deep Q-network (DQN) and deep deterministic policy gradient (DDPG) to build DRL-R. Finally, we demonstrate the effectiveness of DRL-R through extensive simulations. Benefiting from continuous learning with a global view, DRL-R achieves lower flow completion time, higher throughput, better load balance, and better robustness than OSPF. Because it utilizes network resources more efficiently, DRL-R also outperforms another DRL-based routing scheme, TIDE. Compared to OSPF and TIDE, respectively, DRL-R improves throughput by up to 40% and 18.5%, reduces flow completion time by up to 47% and 39%, and improves link load balance by up to 18.8% and 9.3%. We also observe that DDPG performs better than DQN.
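
As a rough illustration of the resource-recombination idea (folding a node's cache into a link's bandwidth metric via its delay-reduction contribution), here is a minimal Python sketch. The function name, the cache-hit-ratio input, and the effective-bandwidth formula are all illustrative assumptions, not the paper's actual scoring model, which is defined in the full text.

```python
# Illustrative sketch only: recombining cache and bandwidth into one
# link score. A cache that absorbs a fraction of redundant traffic
# behaves like extra bandwidth, so its delay-reduction contribution
# can be folded into a single metric a DRL agent consumes as state.

def recombined_link_score(bandwidth_bps: float,
                          cache_hit_ratio: float,
                          flow_size_bits: float) -> float:
    """Return an effective-bandwidth score for one link.

    bandwidth_bps   : available link bandwidth (bits/s)
    cache_hit_ratio : fraction of traffic served from the node's cache [0, 1]
    flow_size_bits  : size of the flow being routed (bits)
    """
    # Transfer delay if every bit had to cross the link.
    base_delay = flow_size_bits / bandwidth_bps
    # Cache hits never traverse the link, so only the miss fraction does.
    effective_delay = (1.0 - cache_hit_ratio) * base_delay
    if effective_delay == 0:
        return float("inf")  # fully cached: the link adds no transfer delay
    # Express the combined resource as the bandwidth that would yield the
    # same delay with no cache at all ("effective bandwidth").
    return flow_size_bits / effective_delay


if __name__ == "__main__":
    # A 10 Gb/s link whose node caches 30% of traffic scores like a
    # ~14.3 Gb/s cache-less link for a 1 Gb flow.
    print(recombined_link_score(10e9, 0.3, 1e9))
```

Under these assumptions, the score rises monotonically with both bandwidth and cache hit ratio, which is the property the abstract attributes to the recombined state: links backed by useful caches look "wider" to the routing agent.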
