Abstract

The continuous growth of network communication demand places ever higher requirements on network infrastructure. The elastic optical network (EON) has great potential to support the continued demand for communication bandwidth. Efficient use of the bandwidth resources of an EON is particularly important for alleviating network blocking, and it depends on the routing, modulation, and spectrum allocation (RMSA) process. However, the time-varying state of an EON, caused by the uncertainty of future demands, makes it considerably harder to perform online RMSA in real time. To address this problem, this paper proposes a Deep Q-Network (DQN) algorithm with a prioritized experience replay mechanism to perform the RMSA process in real time. The proposed algorithm consists of two parts. The first is a Markov Decision Process (MDP) based state transition for online RMSA, driven by a trained Q-network. The second is an offline DQN-based algorithm that trains the Q-network to support the decision-making of the RMSA state transitions, where a prioritized experience replay mechanism and a SumTree structure are introduced to speed up DQN training. Simulation results show that, compared with the traditional Deep Q-Network algorithm, the proposed algorithm nearly doubles the Q-network training speed, and compared with the traditional shortest-path plus first-fit (SP+FF) algorithm, the trained Q-network reduces the blocking rate by nearly 35%.
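As an illustrative aside, the SumTree mentioned in the abstract is the standard data structure behind proportional prioritized experience replay: leaves hold per-transition priorities, internal nodes hold the sum of their children, so sampling a transition with probability proportional to its priority takes O(log n). The following is a minimal Python sketch under that assumption; the class, variable names, and priorities shown are illustrative and not taken from the paper.

```python
import numpy as np

class SumTree:
    """Binary tree whose leaves store transition priorities; each internal
    node stores the sum of its children, enabling O(log n) proportional
    sampling and priority updates."""

    def __init__(self, capacity):
        self.capacity = capacity                  # max stored transitions
        self.tree = np.zeros(2 * capacity - 1)    # internal nodes + leaves
        self.data = [None] * capacity             # transition storage
        self.write = 0                            # next leaf slot to overwrite
        self.size = 0

    def add(self, priority, transition):
        """Store a transition at the next leaf and set its priority."""
        leaf = self.write + self.capacity - 1
        self.data[self.write] = transition
        self.update(leaf, priority)
        self.write = (self.write + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def update(self, leaf, priority):
        """Change a leaf's priority and propagate the change to the root."""
        change = priority - self.tree[leaf]
        self.tree[leaf] = priority
        while leaf != 0:
            leaf = (leaf - 1) // 2
            self.tree[leaf] += change

    def sample(self, s):
        """Walk down from the root to the leaf covering cumulative sum s."""
        idx = 0
        while True:
            left, right = 2 * idx + 1, 2 * idx + 2
            if left >= len(self.tree):            # reached a leaf
                break
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = right
        return idx, self.tree[idx], self.data[idx - self.capacity + 1]

# Proportional sampling: draw s uniformly from [0, total priority], so
# transitions with larger TD error (hypothetical values below) are replayed
# more often during DQN training.
tree = SumTree(capacity=4)
for i, td_error in enumerate([0.5, 2.0, 1.0, 0.5]):
    tree.add(td_error, f"transition-{i}")
idx, priority, transition = tree.sample(np.random.uniform(0, tree.tree[0]))
print(priority, transition)
```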
