Abstract

Artificial intelligence (AI) and machine learning (ML) continue to demonstrate substantial capabilities in solving a wide range of optical-network-related tasks such as fault management, resource allocation, and lightpath quality of transmission (QoT) estimation. However, the research community has focused on ML models' predictive capabilities while neglecting model understanding, i.e., the ability to interpret how a model reasons and arrives at its predictions. This lack of transparency hinders the understanding of a model's behavior and prevents operators from judging, and hence trusting, the model's decisions. To mitigate the lack of transparency and trust in ML, explainable AI (XAI) frameworks can be leveraged to explain how a model correlates input features to its outputs. In this paper, we focus on the application of XAI to lightpath QoT estimation. In particular, we exploit Shapley additive explanations (SHAP) as the XAI framework. Before presenting our analysis, we provide a brief overview of XAI and SHAP, discuss the benefits of applying XAI in networking, and survey studies that apply XAI to networking tasks. Then, we model lightpath QoT estimation as a supervised binary classification task that predicts whether the bit error rate (BER) associated with a lightpath is below or above a reference acceptability threshold, and we train an ML extreme gradient boosting (XGBoost) model as the classifier. Finally, we demonstrate how to apply SHAP to extract insights about the model and to inspect misclassifications.
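To make the pipeline described above concrete, the following is a minimal sketch, not the authors' actual implementation: it trains an XGBoost binary classifier on synthetic stand-in data and explains it with SHAP's TreeExplainer. The feature names (length_km, num_spans, launch_power_dbm, mod_format) and the label-generating rule are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for lightpath features (names are hypothetical).
n = 2000
X = np.column_stack([
    rng.uniform(100, 3000, n),   # path length [km]
    rng.integers(1, 30, n),      # number of fiber spans
    rng.uniform(-3, 3, n),       # launch power [dBm]
    rng.integers(1, 8, n),       # modulation-format index
])
feature_names = ["length_km", "num_spans", "launch_power_dbm", "mod_format"]

# Illustrative label: 1 if the (synthetic) BER exceeds the threshold.
y = (X[:, 0] / 3000 + X[:, 1] / 30 + rng.normal(0, 0.2, n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Binary QoT classifier: BER above/below the acceptability threshold.
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_tr, y_tr)

# SHAP TreeExplainer yields per-feature contributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)

# Global view: mean |SHAP value| ranks features by overall influence.
shap.summary_plot(shap_values, X_te, feature_names=feature_names)
```

In the same spirit, individual misclassified test samples can be inspected with per-sample SHAP values (e.g., via shap.force_plot) to see which features pushed the model toward the wrong class.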
