Abstract

Machine learning has become a popular method for intrusion detection because it can adapt to changing conditions. Owing to the scarcity of high-quality labeled instances, some researchers have turned to semi-supervised learning, which exploits unlabeled instances to improve classification. However, involving unlabeled instances in the learning process also introduces a vulnerability: attackers can generate fake unlabeled instances that mislead the final classifier, so that some intrusions go undetected. In this paper we show how attackers can influence a semi-supervised classifier by constructing such unlabeled instances, and we propose a possible defense method based on active learning. Experiments show that the misleading attack reduces the accuracy of the semi-supervised learning method, and that the proposed defense achieves higher accuracy than the original semi-supervised learner under the proposed attack.
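To make the threat model concrete, the following is a minimal, illustrative sketch (not the paper's exact attack or defense) of the mechanics involved: a self-training classifier is fit on a small labeled set plus unlabeled data, and an attacker then injects crafted fake unlabeled points into the pool before retraining. The data distribution, the choice of `SelfTrainingClassifier` with logistic regression, and the placement of the fake points are all assumptions for illustration; whether the injected points actually degrade accuracy depends on the learner and the attacker's strategy.

```python
# Sketch: injecting attacker-crafted unlabeled points into a
# semi-supervised (self-training) pipeline. Illustrative only;
# the paper's actual attack construction and active-learning
# defense are not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

def sample(n, mean):
    """Draw n 2-D points from a Gaussian around `mean` (toy data)."""
    return rng.normal(mean, 1.0, size=(n, 2))

# Small labeled set (the scarce, high-quality labels).
X_lab = np.vstack([sample(5, [-3, 0]), sample(5, [3, 0])])
y_lab = np.array([0] * 5 + [1] * 5)

# Larger unlabeled pool that semi-supervised learning exploits.
X_unlab = np.vstack([sample(50, [-3, 0]), sample(50, [3, 0])])

# Held-out test set for measuring accuracy.
X_test = np.vstack([sample(200, [-3, 0]), sample(200, [3, 0])])
y_test = np.array([0] * 200 + [1] * 200)

def fit_ssl(extra_unlabeled=None):
    """Fit self-training on labeled + unlabeled data; return test accuracy."""
    Xu = X_unlab if extra_unlabeled is None else np.vstack([X_unlab, extra_unlabeled])
    X = np.vstack([X_lab, Xu])
    # scikit-learn convention: -1 marks unlabeled samples.
    y = np.concatenate([y_lab, np.full(len(Xu), -1)])
    clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.6)
    clf.fit(X, y)
    return clf.score(X_test, y_test)

acc_clean = fit_ssl()

# Attacker-controlled fake unlabeled points, placed near the decision
# boundary (position and count are arbitrary assumptions here); the
# learner will pseudo-label them and fold them into training.
fake_unlabeled = rng.normal([1.5, 0], 0.3, size=(200, 2))
acc_poisoned = fit_ssl(fake_unlabeled)

print(f"clean accuracy:    {acc_clean:.3f}")
print(f"poisoned accuracy: {acc_poisoned:.3f}")
```

The key point the sketch shows is the attack surface itself: the unlabeled pool is attacker-writable, yet its pseudo-labels feed directly back into training, which is exactly what the proposed active-learning defense would need to guard.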
