Abstract

The manual forensics investigation of security incidents is an opaque process that involves the collection and correlation of diverse evidence. In this work we first conduct a complex experiment to expand our understanding of forensics analysis processes. During a period of four weeks, we systematically investigated 200 detected security incidents involving compromised hosts within a large operational network. We used data from four commonly used security sources, namely Snort alerts, reconnaissance and vulnerability scanners, blacklists, and a search engine, to manually investigate these incidents. Based on our experiment, we first evaluate the (complementary) utility of the four security data sources and surprisingly find that the search engine provided useful evidence for diagnosing many more incidents than the more traditional security sources, i.e., blacklists, reconnaissance, and vulnerability reports. Based on our validation, we then identify and make publicly available a list of 165 good Snort signatures, i.e., signatures that were effective in identifying validated malware without producing false positives. In addition, we analyze the characteristics of good signatures and identify strong correlations between different signature features and their effectiveness, i.e., the number of validated incidents in which a good signature is identified. Finally, we introduce an IDS signature quality metric that security specialists can use to evaluate the available rulesets, prioritize the generated alerts, and facilitate the forensics analysis process. We apply our metric to characterize the most popular Snort rulesets. Our analysis of signatures is useful not only for configuring Snort but also for establishing best practices and for teaching how to write new IDS signatures.
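
The abstract does not reproduce the exact definition of the proposed quality metric. As a rough illustration of the idea only, the sketch below assumes a toy score that rewards a signature for the number of validated incidents it fires in and penalizes firings in incidents that were not confirmed; the types and function names (AlertRecord, signature_quality) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class AlertRecord:
    sid: int          # Snort signature ID that fired
    incident_id: str  # investigated incident the alert belongs to
    validated: bool   # True if manual forensics confirmed the infection

def signature_quality(alerts: list[AlertRecord]) -> dict[int, float]:
    """Toy per-signature score: the fraction of distinct incidents in which
    the signature fired that were validated, weighted by the number of
    validated incidents it covered. Illustrative only, not the paper's metric."""
    fired: dict[int, set[str]] = {}
    confirmed: dict[int, set[str]] = {}
    for a in alerts:
        fired.setdefault(a.sid, set()).add(a.incident_id)
        if a.validated:
            confirmed.setdefault(a.sid, set()).add(a.incident_id)

    scores: dict[int, float] = {}
    for sid, incidents in fired.items():
        good = confirmed.get(sid, set())
        precision = len(good) / len(incidents)   # 1.0 when no false positives
        scores[sid] = precision * len(good)      # weight by validated coverage
    return scores

# Hypothetical data: signature 2003 fired only in validated incidents,
# signature 4711 fired in one validated and one unconfirmed incident.
alerts = [
    AlertRecord(2003, "inc-01", True),
    AlertRecord(2003, "inc-07", True),
    AlertRecord(4711, "inc-07", True),
    AlertRecord(4711, "inc-13", False),
]
print(signature_quality(alerts))  # {2003: 2.0, 4711: 0.5}
```

Under this toy definition, a signature that never fires in unconfirmed incidents scores exactly the number of validated incidents it detects, which matches the notion of effectiveness used above.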

Highlights

  • Security analysts are overwhelmed by the massive volume of data produced by different security sources

  • We introduce a novel signature quality metric that can be used by security specialists to evaluate the available rulesets, prioritize the generated alerts, and facilitate the forensics analysis processes

  • What does a good IDS signature look like? We provide a list of Snort signatures, identified by our alert correlator, that detect confirmed malware without generating false positives (a rule-feature sketch follows this list)
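
The highlights do not list which signature features were correlated with effectiveness. As an assumption for illustration, the sketch below extracts a few structural properties that are commonly inspected in Snort rules (number of content and pcre matches, presence of flow and HTTP modifiers, average content length); the example rule and all names are made up.

```python
import re

# Made-up example rule; in practice rules come from the deployed ruleset.
RULE = (
    'alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS '
    '(msg:"MALWARE-CNC Win.Trojan beacon"; flow:to_server,established; '
    'content:"/gate.php"; http_uri; content:"User-Agent|3A| Mozilla/4.0"; '
    'sid:1000001; rev:3;)'
)

def rule_features(rule: str) -> dict:
    """Extract a few structural features of a Snort signature that one
    might correlate with its effectiveness (the feature choice is illustrative)."""
    options = rule[rule.index("(") + 1 : rule.rindex(")")]
    contents = re.findall(r'content:"([^"]*)"', options)
    return {
        "n_content": len(contents),
        "n_pcre": len(re.findall(r"\bpcre:", options)),
        "has_flow": "flow:" in options,
        "uses_http_modifiers": bool(re.search(r"\bhttp_(uri|header|client_body)\b", options)),
        "avg_content_len": sum(len(c) for c in contents) / max(1, len(contents)),
    }

print(rule_features(RULE))
# e.g. {'n_content': 2, 'n_pcre': 0, 'has_flow': True, 'uses_http_modifiers': True, ...}
```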


Summary

Introduction

Security analysts are overwhelmed by the massive volume of data produced by different security sources. Checking the remote hosts contacted by internal machines against public blacklists is useful for identifying whether suspicious internal hosts frequently establish communication with known malicious domains.

Search Engine

Apart from using intrusion detection alerts, active scans, and blacklists to characterize remote hosts that communicate frequently with the investigated local hosts, we exploit security-related information residing on the web. When a host exhibits malicious activity, it leaves traces in different forms in publicly available sources such as DNS lists, website access logs, proxy logs, P2P tracker lists, forums, bulletins, banlists, and IRC lists. To retrieve this information, we query the Google search engine using as input the IP address and the respective domain name of the local and remote hosts. In 170 out of the 200 investigated incidents, the further manual examination of IDS alerts observed near the time of a detected incident provided useful evidence for validation.
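
The summary does not describe any tooling behind these lookups. The following is a minimal sketch of how such evidence gathering could be automated, assuming a locally maintained blacklist file (one IP address or domain per line) and using reverse DNS plus search-engine query URLs as manual pivot points; the function names and the file format are assumptions, not the paper's implementation.

```python
import ipaddress
import socket
from urllib.parse import quote_plus

def load_blacklist(path: str) -> set[str]:
    """Load one blacklisted IP address or domain per line ('#' starts a comment)."""
    with open(path) as fh:
        return {line.strip() for line in fh
                if line.strip() and not line.lstrip().startswith("#")}

def blacklisted_peers(remote_peers: list[str], blacklist: set[str]) -> list[str]:
    """Remote hosts contacted by an investigated machine that appear on the blacklist."""
    return [peer for peer in remote_peers if peer in blacklist]

def search_pivots(remote_ip: str) -> list[str]:
    """Build search-engine query URLs for a remote IP and, when reverse DNS
    resolves, its domain name; an analyst opens these to look for traces in
    forums, banlists, proxy logs, and similar public sources."""
    ipaddress.ip_address(remote_ip)                       # validate the input
    terms = [remote_ip]
    try:
        terms.append(socket.gethostbyaddr(remote_ip)[0])  # reverse DNS name
    except OSError:
        pass                                              # no PTR record available
    return [f"https://www.google.com/search?q={quote_plus(t)}" for t in terms]
```

Automating the search-engine step end to end would require an official search API; the URLs above are only meant to be opened by the analyst during manual investigation.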

