Abstract

This work aims to enable self-driving networks by tackling the lack of trust that network operators have in Machine Learning (ML) models. We assess and scrutinize the decision-making process of the ML-based classifiers used to compose a self-driving network. First, we investigate and evaluate the accuracy and credibility of classifications made by ML models used to process high-level management intents, and we propose a novel conversational interface (LUMI) that allows operators to use natural language to describe how the network should behave. Second, we analyze and assess the accuracy and credibility of existing ML models for network security and performance. We also uncover the need to reinvent how researchers apply ML to networking problems, so we propose a new ML pipeline that introduces steps to scrutinize models using techniques from the emerging field of eXplainable Artificial Intelligence (XAI). Finally, we investigate whether there is a viable method to improve operators' trust in the decisions made by the ML models that enable self-driving networks. This investigation led us to propose TRUSTEE, a new XAI method that extracts explanations from any given black-box ML model in the form of decision trees of manageable size. Our results show that ML models widely applied to solve networking problems have not been put under proper scrutiny and can easily break when exposed to real-world traffic; such models therefore need to be corrected before they can properly fulfill their given tasks.
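To illustrate the general idea behind decision-tree explanations of black-box models (a minimal sketch, not the TRUSTEE implementation itself), the following scikit-learn example fits a small surrogate tree to a black-box classifier's predictions; the dataset, model choice, and tree depth are illustrative assumptions.

```python
# Sketch: approximating a black-box classifier with a small surrogate
# decision tree. Dataset, black-box model, and max_depth are illustrative,
# not the method described in this work.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for a black-box model used in a networking task.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a small surrogate tree on the black box's *predictions*, not the labels,
# so the tree mimics the model's decision-making rather than the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))
```

Limiting the surrogate's depth keeps the extracted explanation small enough for an operator to inspect, at the cost of some fidelity to the original model.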
