Abstract

Cyber-Physical Systems (CPSs) play a critical role in our modern infrastructure due to their capability to connect computing resources with physical systems. As such, topics such as reliability, performance, and security of CPSs continue to receive increased attention from the research community. CPSs produce massive amounts of data, creating opportunities to use predictive Machine Learning (ML) models for performance monitoring and optimization, preventive maintenance, and threat detection. However, the "black-box" nature of complex ML models is a drawback when they are used in safety-critical systems such as CPSs. While explainable ML has been an active research area in recent years, much of the work has focused on supervised learning. As CPSs rapidly produce massive amounts of unlabeled data, relying on supervised learning alone is not sufficient for data-driven decision making in CPSs. Therefore, if we are to maximize the use of ML in CPSs, it is necessary to have explainable unsupervised ML models. In this paper, we outline how unsupervised explainable ML could be used within CPSs. We review the existing work in unsupervised ML, present initial desiderata of explainable unsupervised ML for CPSs, and present a Self-Organizing Maps based explainable clustering methodology that generates global and local explanations. We evaluate the fidelity of the generated explanations using feature perturbation techniques. The results show that the proposed method identifies the most important features responsible for the decision-making process of Self-Organizing Maps. Further, we demonstrate that explainable Self-Organizing Maps are a strong candidate for explainable unsupervised machine learning by comparing their capabilities and limitations with current explainable unsupervised methods.
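To make the two ideas in the abstract concrete — a Self-Organizing Map (SOM) as an unsupervised model, and feature perturbation as a fidelity check on its explanations — the following is a minimal, self-contained sketch. It is illustrative only, not the paper's actual implementation: the toy data, grid size, learning schedule, and the shuffle-one-feature importance measure are all assumptions made for this example.

```python
# Illustrative sketch (not the paper's method): a tiny 1-D SOM trained on
# toy data, followed by a perturbation-based feature-importance check.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two clusters that differ only in feature 0, so feature 0
# should dominate the map's decision making.
X = np.vstack([
    rng.normal([0.0, 0.5], 0.1, size=(100, 2)),
    rng.normal([1.0, 0.5], 0.1, size=(100, 2)),
])

# 1-D SOM: a line of `m` neurons, each with a weight vector in input space.
m, dim = 5, X.shape[1]
W = rng.random((m, dim))

def bmu(x, W):
    """Index of the best-matching unit (closest neuron) for sample x."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))

# Online training: pull the BMU and its grid neighbours toward each sample,
# with a decaying learning rate and neighbourhood width.
n_epochs = 20
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)
    sigma = max(1.0 * (1 - epoch / n_epochs), 0.5)
    for x in rng.permutation(X):
        b = bmu(x, W)
        for j in range(m):
            h = np.exp(-((j - b) ** 2) / (2 * sigma ** 2))  # neighbourhood kernel
            W[j] += lr * h * (x - W[j])

# Perturbation-based fidelity check: shuffle one feature at a time and
# count how often samples change their BMU. Features the map truly relies
# on should cause many reassignments when perturbed.
base = np.array([bmu(x, W) for x in X])
importance = []
for f in range(dim):
    Xp = X.copy()
    Xp[:, f] = rng.permutation(Xp[:, f])
    changed = np.mean(np.array([bmu(x, W) for x in Xp]) != base)
    importance.append(float(changed))

print(importance)  # feature 0 should show far more BMU changes than feature 1
```

The same shuffle-and-compare pattern generalizes: a global explanation ranks features by how much perturbing them disrupts the map's assignments overall, while a local explanation applies the same test restricted to the samples mapped to one neuron or cluster.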

Highlights

  • Cyber-Physical Systems (CPSs) are capable of seamlessly integrating computing and physical resources [1, 2]

  • In this paper, we investigated the need for explainable unsupervised Machine Learning (ML)

  • We discussed the necessity for explainable unsupervised ML in the Cyber-Physical Systems domain, as CPSs generate large amounts of unlabeled data at rapid speeds


Summary

INTRODUCTION

Cyber-Physical Systems (CPSs) are capable of seamlessly integrating computing and physical resources [1, 2]. For human-in-the-loop systems, humans need to understand the underlying algorithms so that they can trust these models. By addressing this question, the explainable machine learning (XAI) research area has received a lot of attention. Supervised learning is unable to take advantage of the abundance of unlabeled real-world data, and its reliance on labeled data can introduce biases. These limitations have shifted attention towards unsupervised ML algorithms, which are predicted to be far more important in the long term [14].

BACKGROUND
CURRENT LITERATURE ON EXPLAINABLE UNSUPERVISED MACHINE LEARNING
EXPERIMENT SETUP AND RESULTS
MODEL FIDELITY
Limitations
CONCLUSIONS