Abstract

Open Set Recognition (OSR) is the ability of a machine learning (ML) algorithm to classify the known and recognize the unknown. In other words, OSR enables novelty detection in classification algorithms. This broader approach is critical for detecting new types of attacks, including zero-days, thereby improving the effectiveness and efficiency of various ML-enabled mission-critical systems, such as cyber-physical systems, facial recognition, spam filtering, and cyber defense systems like intrusion detection systems (IDS). In ML algorithms, such as deep learning (DL) classifiers, hyperparameters control the learning process; their values affect other model parameters, such as weights and biases, which in turn determine the performance of OSR algorithms. Moreover, OSR introduces additional parameters, making DL classifiers larger and their training more computationally intensive. Determining a suitable set of hyperparameters and parameters is a computationally expensive task. Alternative OSR algorithms have demonstrated promising results on image datasets, but only limited studies have been performed in the context of IDS. This paper proposes OpenSetPerf, an empirical investigation of three prominent OSR algorithms using a current, real-world network intrusion detection system (NIDS) benchmark dataset to discover the relationship between the hyperparameter values of DL-based OSR algorithms and their performance. OpenSetPerf evaluates these algorithms through quantitative studies with widely used ML performance evaluation metrics.
