Abstract

In Software-Defined Networks (SDNs), achieving efficient Quality of Service (QoS)-aware routing is challenging but critical for improving network performance, where QoS metrics include, for example, average latency, packet loss ratio, and throughput. The SDN controller can use network statistics together with a Deep Reinforcement Learning (DRL) method to address this challenge. In this paper, we formulate dynamic routing in an SDN as a Markov decision process and propose a DRL algorithm, the Asynchronous Advantage Actor-Critic QoS-aware Routing Optimization Mechanism (AQROM), to determine routing strategies that balance traffic loads across the network. AQROM improves the QoS of the network and reduces training time through dynamic routing strategy updates; that is, the reward function can be promptly altered to match the current optimization objective, regardless of the network topology and traffic pattern. AQROM can be regarded as a one-step, black-box routing optimization mechanism that operates on high-dimensional input and output sets with both discrete and continuous states and actions in the SDN. Extensive simulations were conducted using OMNeT++, and the results demonstrated that AQROM 1) achieved much faster and more stable convergence than the Deep Deterministic Policy Gradient (DDPG) and Advantage Actor-Critic (A2C) algorithms, 2) incurred a lower packet loss ratio and latency than Open Shortest Path First (OSPF), DDPG, and A2C, and 3) yielded higher and more stable throughput than OSPF, DDPG, and A2C.
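To make the dynamic-reward idea concrete, the following is a minimal, simplified sketch of an advantage actor-critic update in which the reward function is selected from the current optimization objective. It is synchronous (not the asynchronous, multi-worker A3C setup of AQROM), and the network sizes, state/action encodings, statistics keys, and reward definitions are illustrative assumptions rather than the paper's implementation.

```python
# Illustrative sketch only: a simplified advantage actor-critic update with a
# swappable reward function. All dimensions, reward definitions, and the
# `stats` keys below are assumptions, not AQROM's actual implementation.
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, action_dim)  # actor: routing-action logits
        self.value = nn.Linear(hidden, 1)            # critic: state-value estimate

    def forward(self, state):
        h = self.shared(state)
        return self.policy(h), self.value(h)

# Hypothetical reward functions keyed by the optimization objective; switching
# the objective changes the reward signal without changing the learner itself.
reward_fns = {
    "latency":    lambda stats: -stats["avg_latency"],
    "loss":       lambda stats: -stats["packet_loss_ratio"],
    "throughput": lambda stats: stats["throughput"],
}

def actor_critic_update(model, optimizer, state, action, next_state, stats,
                        objective="latency", gamma=0.99):
    """One-step (TD(0)) actor-critic update driven by the chosen objective."""
    reward = reward_fns[objective](stats)
    logits, value = model(state)
    _, next_value = model(next_state)
    # One-step advantage estimate: r + gamma * V(s') - V(s).
    advantage = reward + gamma * next_value.detach() - value
    log_prob = torch.log_softmax(logits, dim=-1)[action]
    actor_loss = -log_prob * advantage.detach()   # policy-gradient term
    critic_loss = advantage.pow(2)                # value-function regression term
    loss = actor_loss + critic_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch, changing `objective` swaps the reward function between updates, which is one way the optimization target could be altered on the fly without modifying the policy or value networks.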
