Abstract
Ensuring Responsible AI practices is paramount in the advancement of systems founded upon machine learning (ML) principles, particularly in sensitive domains such as intrusion detection in cybersecurity. A fundamental aspect of Responsible AI is reproducibility, which guarantees the reliability and transparency of research outcomes. In this paper, we address the critical challenge of establishing reproducibility for intrusion detection utilizing ML techniques. Leveraging the NSL-KDD dataset and the Edge-IIoTset, we carry out extensive experiments to evaluate the efficacy of our approach. Our study prioritizes meticulous experiment design and careful implementation setups, aligning with the principles of Responsible AI. Through rigorous experimentation and insightful discussion, we underscore the importance of reproducibility as a cornerstone in ensuring the resilience and reliability of intrusion detection systems. Our findings offer valuable insights for researchers and practitioners striving to develop Responsible AI solutions in cybersecurity and beyond. The source code is publicly accessible at https://github.com/Salma-00/Machine-Learning-for-Intrusion-Detection.