Abstract

The escalating demand for network capacity is catalyzing the adoption of space division multiplexing (SDM) technologies. With continuous advances in multi-core fiber (MCF) fabrication, MCF-based SDM networks are positioned as a viable and promising solution for achieving higher transmission capacities in multi-dimensional optical networks. However, the extensive network resources offered by MCF-based SDM networks make it difficult for traditional routing, modulation, spectrum, and core allocation (RMSCA) methods to achieve adequate performance. This paper proposes an RMSCA approach based on deep reinforcement learning (DRL) for MCF-based elastic optical networks (MCF-EONs). Within the solution, a novel state representation carrying essential network information and a fragmentation-aware reward function are designed to guide the agent in learning effective RMSCA policies. Additionally, we adopt a proximal policy optimization (PPO) algorithm featuring an action mask to improve the sampling efficiency of the DRL agent and speed up training. The performance of the proposed algorithm was evaluated on two network topologies with varying traffic loads and with fibers of different core counts. The results confirm that the proposed algorithm outperforms heuristics and a state-of-the-art DRL-based RMSCA algorithm, reducing the service blocking probability by around 83% and 51%, respectively. Moreover, the proposed algorithm can be applied to networks with and without core-switching capability, and its inference complexity is compatible with real-world deployment requirements.
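To illustrate the action-masking mechanism mentioned above, the minimal sketch below shows how infeasible route/core/spectrum choices can be excluded from a PPO policy's action distribution before sampling, so the agent never wastes samples on blocked allocations. All names, shapes, and the PyTorch framing are illustrative assumptions, not the paper's implementation; the paper's state representation and fragmentation-aware reward are not reproduced here.

```python
# Hypothetical sketch of PPO action masking for RMSCA (not the paper's code).
import torch
from torch.distributions import Categorical

def masked_action_distribution(logits: torch.Tensor,
                               mask: torch.Tensor) -> Categorical:
    """logits: raw policy-network outputs, shape (batch, n_actions).
    mask: boolean tensor, True where a candidate allocation is feasible."""
    # Drive infeasible actions' logits toward -inf so softmax assigns them
    # effectively zero probability.
    masked_logits = logits.masked_fill(~mask, torch.finfo(logits.dtype).min)
    return Categorical(logits=masked_logits)

# Example: 8 candidate (route, core, spectrum-slot) actions, 3 feasible.
logits = torch.randn(1, 8)
mask = torch.tensor([[True, False, True, False, False, True, False, False]])
dist = masked_action_distribution(logits, mask)
action = dist.sample()            # always one of the feasible actions
log_prob = dist.log_prob(action)  # fed into the PPO clipped objective
```

Masking at the distribution level, rather than penalizing invalid actions through the reward, keeps every collected sample useful for learning, which is consistent with the sampling-efficiency benefit the abstract attributes to the action mask.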
