Abstract

System administrators cope with security incidents through a variety of monitors, such as intrusion detection systems, event logs, and security information and event management (SIEM) systems. Monitors generate large volumes of alerts that overwhelm the operations team and make forensics time-consuming. Filtering is an established technique to reduce the volume of alerts. Despite the number of filtering proposals, few studies have validated filtering results on real production datasets. This paper analyzes a number of state-of-the-art filtering techniques applied to security datasets, using 14 months of alerts generated in a SaaS Cloud. Our analysis measures and compares the reduction in alert volume achieved by the filters, highlights the pros and cons of each filter, and provides insights into the practical implications of filtering as affected by the characteristics of a dataset. We complement the analysis with a method to validate the output of a filter in the absence of ground truth, i.e., knowledge of the incidents that occurred in the system at the time the alerts were generated. The analysis addresses blacklist, conceptual clustering, and bytes techniques, as well as our filtering proposal based on term weighting.
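To give a concrete sense of term-weighting-based alert filtering, the sketch below scores each alert message with TF-IDF and discards alerts composed only of terms that recur across almost every message. This is a minimal illustration of the general idea, not the paper's actual implementation; the threshold and scoring choices are our own assumptions.

```python
# Minimal sketch of a term-weighting alert filter (illustrative only;
# not the implementation proposed in the paper). Alerts whose terms
# appear in nearly every message score low and are filtered out.
import math
from collections import Counter

def tfidf_filter(alerts, threshold=0.1):
    """Keep alerts whose average TF-IDF term weight exceeds `threshold`."""
    docs = [a.lower().split() for a in alerts]
    n = len(docs)
    # document frequency: in how many alerts each term appears
    df = Counter(t for d in docs for t in set(d))
    kept = []
    for alert, terms in zip(alerts, docs):
        tf = Counter(terms)
        score = sum((tf[t] / len(terms)) * math.log(n / df[t]) for t in tf)
        if score / len(tf) > threshold:
            kept.append(alert)
    return kept

alerts = [
    "login failed for user root",
    "login failed for user root",
    "login failed for user root",
    "disk quota exceeded on volume db01",
]
print(tfidf_filter(alerts))
# → ['disk quota exceeded on volume db01']
```

Here the three identical, high-frequency login alerts are suppressed, while the rarer disk-quota alert, whose terms carry more information, is retained.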
