Abstract

The rapid integration of Artificial Intelligence and Internet of Things (AI-IoT) technologies has given rise to a pivotal element of the upcoming digital era: the Metaverse. This confluence has significantly impacted virtual learning platforms by introducing enhanced, immersive, and interactive environments for learners, educators, and institutions. However, the growing reliance on the Metaverse necessitates robust cybersecurity measures to detect and mitigate cyber threats and ensure the safety of users. This paper proposes an explainable deep neural network (DNN) designed to identify and address network intrusion attacks within Metaverse learning environments. By leveraging recent cybersecurity IoT datasets and employing an effective feature selection method, this study provides a visually interpretable, trustworthy, and quantitative explanation of the network intrusion detection system (NIDS) model using the Shapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) explainability methods. The adopted explainable DNN, capable of processing network traffic features from interconnected Metaverse devices and IoT sensors, can facilitate accurate and comprehensible separation of anomalous from benign activities within the Metaverse. The NIDS model achieves an accuracy of 99.9%, supporting a more secure and trustworthy Metaverse learning environment.
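The abstract above pairs a black-box intrusion classifier with local explanations in the LIME style. As a minimal illustration of that idea only (the paper's actual DNN, IoT dataset, and feature set are not reproduced here), the sketch below trains a stand-in classifier on synthetic "traffic" features and builds a LIME-style local surrogate: perturb one flow, weight perturbations by proximity, and fit a weighted linear model whose coefficients rank per-feature influence. All names and data are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for network-traffic features (e.g., packet rate, payload
# size); the paper's real dataset and DNN architecture are not used here.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # "anomalous" iff this rule holds

# Stand-in black-box NIDS model (any classifier with predict_proba works).
model = RandomForestClassifier(random_state=0).fit(X, y)

def local_explanation(instance, n_samples=1000, width=1.0):
    """LIME-style local surrogate for one flow: sample perturbations around the
    instance, query the black box, weight samples by an RBF proximity kernel,
    and fit a weighted linear model whose coefficients rank feature influence."""
    perturbed = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    preds = model.predict_proba(perturbed)[:, 1]        # black-box anomaly scores
    dists = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dists ** 2) / width ** 2)        # nearer samples count more
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_                              # per-feature influence

coefs = local_explanation(X[0])
print(np.argsort(-np.abs(coefs))[:2])  # indices of the two most influential features
```

Because the synthetic label depends only on features 0 and 2, those should dominate the surrogate's coefficients; in the paper's setting the analogous ranking is what SHAP and LIME visualize for each flagged flow.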
