We propose a novel application of reinforcement learning (RL) with invalid action masking, together with a new training methodology, for routing and wavelength assignment (RWA) in fixed-grid optical networks, and we demonstrate that the learned policy generalizes to a realistic traffic matrix unseen during training. Invalid action masking and the new training method extend the applicability of RL to RWA in fixed-grid networks from servicing connection requests between node pairs to servicing demands of a given bit rate, such that a lightpath can carry multiple demands subject to capacity constraints. We outline the additional challenges this formulation poses relative to the connection-request RWA problem considered in the literature; for this problem, we found that standard RL performs poorly compared with baseline heuristics, motivating the two proposed techniques. With invalid action masking, domain knowledge is embedded in the RL model to restrict the agent's action space to lightpaths that can support the current request, reducing the size of the action space and thereby increasing the efficacy of the agent. In the proposed training method, the RL model is trained on a simplified version of the problem and evaluated on the target RWA problem, which yields better performance than training directly on the target problem. RL with invalid action masking and this training method consistently outperforms standard RL and three state-of-the-art heuristics, namely k-shortest-path first-fit, first-fit k-shortest-path, and k-shortest-path most-utilized, in terms of the number of accepted transmission requests under both uniform and nonuniform traffic on two real-world core topologies, NSFNET and COST-239. The RWA runtime of the proposed RL model is comparable to that of these heuristics, demonstrating its potential for real-world applicability. Moreover, the RL agent trained on uniform traffic generalizes well to a realistic nonuniform traffic distribution not seen during training, outperforming the heuristics for this traffic. Visualization of the learned policy reveals an RWA strategy that differs significantly from those of the heuristic baselines in the distribution of services across both channels and links.
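As a minimal illustration of the invalid action masking described above (a sketch, not the authors' implementation), the snippet below shows how a feasibility mask over candidate lightpaths can be applied to a policy network's logits so that only lightpaths able to support the current demand receive nonzero probability. The action count, logits, and mask values are hypothetical placeholders; in the paper's setting the mask would encode capacity and wavelength constraints derived from the network state.

```python
import numpy as np

def masked_action_distribution(logits, valid_mask):
    """Turn raw policy logits into a probability distribution over
    candidate lightpaths, assigning zero probability to invalid actions.

    logits     : array of shape (num_actions,) from the policy network
    valid_mask : boolean array of shape (num_actions,); True where the
                 lightpath can support the current request
    """
    # Replace logits of invalid actions with a large negative value so
    # they vanish after the softmax.
    masked_logits = np.where(valid_mask, logits, -1e9)
    exp = np.exp(masked_logits - masked_logits.max())
    return exp / exp.sum()

# Hypothetical example: 5 candidate lightpaths, only 3 can carry the request.
logits = np.array([0.2, 1.5, -0.3, 0.8, 0.1])
valid = np.array([True, False, True, True, False])
probs = masked_action_distribution(logits, valid)
action = np.random.choice(len(probs), p=probs)  # sampled lightpath index
print(probs.round(3), action)
```

Masking the logits before sampling, rather than penalizing invalid choices through the reward, shrinks the effective action space the agent must explore, which is the mechanism the abstract credits for the improved efficacy.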