Abstract

Intrusion Detection Systems (IDS) play an important role in detecting network intrusions. Because intrusions have many variants and include zero-day attacks, traditional signature- and anomaly-based IDS often fail to detect them. Machine Learning (ML)-based solutions, on the other hand, are better at detecting such variants. In this work, we adopt an ML-based IDS that performs three in-sequence tasks: pre-processing, binary detection, and multi-class detection. We consider a multi-tier architecture with one-, two-, and three-tier configurations and map the three in-sequence tasks onto these architectures, resulting in ten task assignments. We evaluate these assignments with queueing theory to determine which are more appropriate for particular service providers. Using simulated annealing, we obtain the computation capacity by allocating the total cost across tiers, based on a fixed parameter set, with the objective of minimizing overall delay. These investigations show that using only the edge and allocating all tasks to it gives the best performance. Furthermore, a two-tier architecture with edge and cloud components is also sufficient for IDS as a Service, with a delay three times better than that of the other task assignments. Our results also indicate that more than 85% of the total capacity is allocated to, and spread across, nodes in the lowest tier for pre-processing in order to reduce delays.
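To illustrate the kind of optimization the abstract describes, the sketch below uses simulated annealing to split a total capacity budget across three tiers so that the summed queueing delay is minimized. This is a minimal, hypothetical illustration only: the M/M/1 delay model 1/(mu - lambda), the arrival rates, and the budget are assumed stand-ins, not the paper's actual model or parameter set.

```python
import math
import random

# Assumed, made-up parameters (not taken from the paper):
ARRIVALS = [8.0, 4.0, 2.0]   # hypothetical task arrival rate per tier
BUDGET = 30.0                # hypothetical total capacity to distribute

def total_delay(caps):
    """Sum of M/M/1 delays 1/(mu - lambda); infeasible allocations -> inf."""
    if any(c <= lam for c, lam in zip(caps, ARRIVALS)):
        return math.inf
    return sum(1.0 / (c - lam) for c, lam in zip(caps, ARRIVALS))

def anneal(steps=20000, temp=1.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    caps = [BUDGET / 3] * 3                  # start from an even split
    cost = total_delay(caps)
    best, best_cost = caps[:], cost
    for _ in range(steps):
        # Neighbor move: shift a small amount of capacity between two
        # random tiers, keeping the total budget fixed.
        i, j = rng.sample(range(3), 2)
        delta = rng.uniform(0.0, 0.5)
        cand = caps[:]
        cand[i] += delta
        cand[j] -= delta
        new_cost = total_delay(cand)
        # Accept improvements always; worse moves with Boltzmann probability.
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / temp):
            caps, cost = cand, new_cost
            if cost < best_cost:
                best, best_cost = caps[:], cost
        temp *= cooling
    return best, best_cost

if __name__ == "__main__":
    caps, delay = anneal()
    print("capacity per tier:", [round(c, 2) for c in caps])
    print("total delay:", round(delay, 4))
```

Because capacity only moves between tiers, every candidate respects the budget constraint by construction; infeasible allocations (a tier's capacity at or below its arrival rate) are rejected via an infinite delay.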
