Abstract
Previous work introduced the idea of grouping alerts at a Hamming distance of 1 to achieve lossless alert aggregation; such aggregated meta-alerts were shown to increase alert interpretability. However, a mean of 84,023 daily Snort alerts was reduced to a still-formidable 14,099 meta-alerts. In this work, we address this limitation by investigating several approaches that all contribute towards reducing the burden on the analyst and providing timely analysis. We explore minimizing the number of both alerts and data elements by aggregating at Hamming distances greater than 1. We show how increasing bin sizes can improve aggregation rates, and we provide a new aggregation algorithm that operates up to an order of magnitude faster at Hamming distance 1. Lastly, we demonstrate the broad applicability of this approach through empirical analysis of Windows security alerts, Snort alerts, netflow records, and DNS logs. The result is a reduction in the cognitive load on analysts by minimizing the overall number of alerts and the number of data elements that need to be reviewed in order for an analyst to evaluate the set of original alerts.
Highlights
Human review of security logs is a difficult and labor-intensive process
A mean 24-hour time slice of 84,023 alerts was reduced to a mean of 14,099 meta-alerts by aggregating on hourly batches. Reviewing such a number of meta-alerts on a daily basis still represents a challenge despite the improvement. We address this limitation by investigating several approaches that all contribute towards reducing the cognitive load on the analyst, by both reducing the overall number of alerts and reducing the number of data elements that need to be reviewed in order for an analyst to evaluate the set of original alerts
We find that increasing batch sizes results in a significant reduction in the number of meta-alerts and a lesser reduction in the number of data elements provided to the analysts
Summary
Human review of security logs is a difficult and labor-intensive process. This is especially true in the area of intrusion detection systems (IDSs), which often suffer from extremely high false positive rates (see, e.g., [1] [2] [3] [4], among others). This problem is exacerbated in signature-based systems such as Snort [5], where broadly-written rules may trigger repeatedly on innocuous packets. The resulting large number of false positives creates a significant workload for IDS analysts, who must sort through them in order to locate the relatively few true positives. The work of [6] mitigated this problem by providing an algorithm to aggregate Snort intrusion detection alerts with discretely-valued fields, combining those that are at most Hamming distance 1 apart. This approach was shown to be effective both in reducing the number of resulting meta-alerts that need to be reviewed by analysts and in increasing their interpretability.
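To illustrate the underlying idea (this is a minimal sketch, not the published algorithm of [6]), the following Python fragment greedily folds discretely-valued alerts into meta-alerts whenever an incoming alert matches an existing meta-alert on all but at most one field, keeping every observed value so that the original alerts remain recoverable. The field names and alert values are hypothetical.

```python
def aggregate_hd1(alerts, fields):
    """Illustrative greedy aggregation of alerts whose discrete field
    values differ from an existing meta-alert in at most one position
    (Hamming distance <= 1).

    Each meta-alert stores, per field, the set of values it has absorbed,
    so no information from the original alerts is discarded.
    """
    meta_alerts = []  # each meta-alert is a dict: field -> set of observed values
    for alert in alerts:
        for meta in meta_alerts:
            # Fields where the incoming alert falls outside the meta-alert's value set
            diff = [f for f in fields if alert[f] not in meta[f]]
            if len(diff) <= 1:
                for f in fields:
                    meta[f].add(alert[f])
                break
        else:
            # No compatible meta-alert found: start a new one from this alert
            meta_alerts.append({f: {alert[f]} for f in fields})
    return meta_alerts


if __name__ == "__main__":
    # Hypothetical Snort-like alert fields and values for demonstration only
    FIELDS = ["sig_id", "src_ip", "dst_ip", "dst_port"]
    alerts = [
        {"sig_id": 2100498, "src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "dst_port": 80},
        {"sig_id": 2100498, "src_ip": "10.0.0.6", "dst_ip": "10.0.0.9", "dst_port": 80},
        {"sig_id": 2100498, "src_ip": "10.0.0.7", "dst_ip": "10.0.0.9", "dst_port": 80},
    ]
    for meta in aggregate_hd1(alerts, FIELDS):
        print(meta)  # the three alerts collapse into one meta-alert varying only in src_ip
```

In this toy run, three alerts that differ only in the source address collapse into a single meta-alert whose src_ip field carries the set of observed addresses, which is the sense in which the aggregation is lossless.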