Abstract
Edge computing, as a new computing model, faces new network-security challenges as it develops rapidly. Because edge nodes have limited performance, the distributed intrusion detection system (DIDS), which in cloud computing relies on high-performance devices, must be adapted to run with a low load so that packets can be detected near the network edge. This paper proposes a low-load DIDS task scheduling method based on the Q-Learning algorithm from reinforcement learning. The method dynamically adjusts its scheduling strategy in response to network changes in the edge computing environment, keeping the overall load of the DIDS low while balancing the two conflicting indicators of low load and packet loss rate. Simulation experiments show that the proposed method achieves better low-load performance than other scheduling methods, while indicators such as the malicious-feature detection rate are not significantly reduced.
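To make the idea concrete, the sketch below illustrates how tabular Q-Learning could drive such a scheduler. This is a minimal toy model, not the paper's actual design: the state (discretized peak engine load), actions (which detection engine receives the next packet batch), reward (negative peak load), and all parameter values are assumptions chosen for illustration.

```python
import random

# Hypothetical sketch: tabular Q-Learning that assigns incoming packet
# batches to detection engines, rewarding low overall load. All names
# and parameters below are illustrative assumptions.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
N_ENGINES = 3    # candidate detection engines at the edge
LOAD_LEVELS = 4  # discretized load of the busiest engine (the state)

# Q[state][action]: expected return of sending the next batch to `action`
Q = [[0.0] * N_ENGINES for _ in range(LOAD_LEVELS)]

def choose_engine(state, rng):
    """Epsilon-greedy action selection over detection engines."""
    if rng.random() < EPSILON:
        return rng.randrange(N_ENGINES)
    row = Q[state]
    return row.index(max(row))

def update(state, action, reward, next_state):
    """Standard Q-Learning update rule."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def simulate(steps=2000, seed=0):
    """Toy traffic loop: one batch arrives per step; engines drain randomly."""
    rng = random.Random(seed)
    loads = [0] * N_ENGINES
    state = 0
    for _ in range(steps):
        action = choose_engine(state, rng)
        loads[action] += 1  # batch assigned to the chosen engine
        # Each busy engine finishes one unit of work with probability 0.5
        loads = [l - 1 if l > 0 and rng.random() < 0.5 else l for l in loads]
        peak = max(loads)
        reward = -peak  # reward favors a low overall DIDS load
        next_state = min(peak, LOAD_LEVELS - 1)
        update(state, action, reward, next_state)
        state = next_state
    return Q

q_table = simulate()
```

The key design point mirrored from the abstract is that the scheduler learns from feedback rather than following a fixed rule, so the policy shifts automatically as traffic conditions at the edge change; balancing load against packet loss would be handled by adding a loss penalty term to the reward.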