Abstract

Intrusion Detection Systems (IDS) increasingly leverage machine learning (ML) to improve the detection of zero-day attacks. As operational complexity grows, enterprises are turning to Intrusion Detection as a Service (IDaS), which requires advanced solutions for efficient ML model selection and resource allocation. Existing research focuses primarily on accuracy and computational efficiency, leaving a gap in solutions that can adapt dynamically. This study introduces Auto-IDaS, a novel integrated solution that employs Reinforcement Learning (RL) for real-time, adaptive management of IDS. Auto-IDaS uses the Deep Q-Network (DQN) algorithm for dynamic ML model selection, automatically adjusting IDaS configurations in response to fluctuating network traffic. Simultaneously, it applies the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm to optimize capacity allocation, minimizing computational cost while maintaining service quality. This dual approach is novel in its use of RL to address both the selection and the allocation challenges within IDaS frameworks. The effectiveness of TD3 is compared against Simulated Annealing (SA), a traditional optimization technique. The results show that using DQN for dynamic model selection improves the reward by 0.29% to 27.04%, effectively balancing detection performance (F1 score), detection time, and computation cost. For capacity allocation, TD3 reaches decisions approximately 5×10⁶ times faster than SA while keeping decision quality within 10% of SA's.
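As a minimal sketch of the trade-off the abstract describes, the snippet below shows one way a scalar reward could combine F1 score, detection time, and computation cost for an RL agent choosing among candidate IDS models. The weights, model pool, and normalization are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical reward sketch: higher F1 raises the reward, while longer
# detection time and higher computation cost lower it. The weights w1..w3
# and the candidate model pool are assumptions for illustration only.

CANDIDATE_MODELS = ["random_forest", "cnn", "lstm"]  # assumed model pool

def reward(f1: float, detect_time: float, cost: float,
           w1: float = 1.0, w2: float = 0.5, w3: float = 0.5) -> float:
    """Combine detection quality, latency, and cost into one scalar.
    Inputs are assumed normalized to [0, 1]."""
    return w1 * f1 - w2 * detect_time - w3 * cost

# In a DQN setup, the agent would observe traffic features (the state),
# select the model (the action) with the highest estimated Q-value, and
# then receive a reward of this form after the detection step.
print(reward(f1=0.93, detect_time=0.2, cost=0.1))  # -> 0.78
```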
