Abstract

The security of cloud networks is heavily contingent upon their ability to detect incoming attacks. An Intrusion Detection System (IDS) monitors a network for precisely this purpose. IDSs fall into one of two categories: signature-based and anomaly-based. Whereas signature-based IDSs rely upon pre-programmed matching rules designed by security experts and are therefore limited in coverage to pre-existing attacks, anomaly-based IDSs attempt to distinguish normal from abnormal traffic, generally using machine learning, and therefore hold the promise of identifying novel attacks. Anomaly-based IDSs can be further divided into those trained online and those trained offline. While online training allows greater flexibility, an online-trained IDS could be mistrained by an adversary to admit specific attacks. This work-in-progress paper proposes a methodology for protecting an online-trained IDS against such mistraining. Two IDSs begin with identical rule sets, but one is allowed to incorporate online data into its training while the other remains static. Both systems can report anomalies, and if the online IDS lets through too much traffic that the offline IDS flags as anomalous, the decision boundaries of the online IDS are adjusted as a safeguard against mistraining. An experiment for testing the approach is proposed.
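
As a rough illustration of the safeguard described above, the following Python sketch pairs a static, offline-trained detector with an online-trained one and tightens the online detector's decision threshold when it admits too many flows that the static detector flags. All names here (ThresholdIDS, DualIDSGuard, divergence_limit, tighten_factor) are hypothetical and not taken from the paper; the threshold-shrinking rule simply stands in for whatever boundary adjustment the authors ultimately evaluate.

```python
# Minimal sketch of the dual-IDS safeguard, assuming both detectors expose a
# score-based interface. Class and parameter names are hypothetical.

from dataclasses import dataclass, field
from typing import Callable, List

Flow = List[float]  # a feature vector describing one network flow


@dataclass
class ThresholdIDS:
    """Anomaly detector that flags a flow when its score exceeds a threshold."""
    score: Callable[[Flow], float]
    threshold: float

    def is_attack(self, flow: Flow) -> bool:
        return self.score(flow) > self.threshold


@dataclass
class DualIDSGuard:
    """Cross-checks an online-trained IDS against a static, offline-trained one.

    If the online IDS admits too many flows that the static IDS flags as
    attacks, its decision threshold is tightened as a safeguard against
    adversarial mistraining.
    """
    static_ids: ThresholdIDS
    online_ids: ThresholdIDS
    divergence_limit: int = 10     # hypothetical tolerance before adjusting
    tighten_factor: float = 0.9    # hypothetical shrink applied to the threshold
    _disagreements: int = field(default=0, init=False)

    def inspect(self, flow: Flow) -> bool:
        """Return True if the flow should be reported as an anomaly."""
        static_alert = self.static_ids.is_attack(flow)
        online_alert = self.online_ids.is_attack(flow)

        # The online IDS let through traffic the static baseline flagged.
        if static_alert and not online_alert:
            self._disagreements += 1
            if self._disagreements > self.divergence_limit:
                # Adjust the online decision boundary back toward the baseline.
                self.online_ids.threshold *= self.tighten_factor
                self._disagreements = 0

        # Report an anomaly if either detector raises an alert.
        return static_alert or online_alert


if __name__ == "__main__":
    # Purely illustrative scoring: both detectors score a flow by its mean value.
    baseline = ThresholdIDS(score=lambda f: sum(f) / len(f), threshold=0.8)
    adaptive = ThresholdIDS(score=lambda f: sum(f) / len(f), threshold=0.8)
    guard = DualIDSGuard(static_ids=baseline, online_ids=adaptive)
    print(guard.inspect([0.9, 0.95, 0.99]))  # True: both detectors flag this flow
```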
