Abstract

In recent years, hundreds of vulnerability discovery methods have been proposed and proven to be effective (i.e., Is Effective) by discovering thousands of vulnerabilities in real-world programs. However, the quantified ability to indicate how effective a method is (i.e., How Effective) remains unknown. In this paper, we perform an empirical study to better understand the effectiveness of these methods. More specifically, we prepare a dataset of 124 papers focusing on vulnerability discovery from S&P, SECURITY, CCS, and NDSS over the past ten years. These papers cover four techniques, namely static analysis, dynamic analysis, concolic analysis, and fuzzing, and report 3970 vulnerabilities, of which 954 have CVE records. We then extract several attributes from each paper and categorize them into five dimensions, i.e., popularity, scalability, capability, severity, and diversity, which enable us to compare the various techniques statistically along these dimensions. Moreover, taking these attributes into account, we propose a scoring method to quantify the effectiveness of a method, thereby indicating how effective it is. The empirical study of these dimensions and effectiveness scores reveals several findings that help better understand the effectiveness of vulnerability discovery techniques.
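The abstract does not spell out the scoring formula, so the following is only an illustrative sketch of how per-dimension scores might be aggregated into a single effectiveness value; the dataclass fields, the equal weights, and the example inputs are all hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical per-dimension scores normalized to [0, 1]; the paper's
# actual attributes and normalization are not given in this abstract.
@dataclass
class DimensionScores:
    popularity: float
    scalability: float
    capability: float
    severity: float
    diversity: float

# Illustrative equal weights; the paper may weight dimensions differently.
WEIGHTS = {
    "popularity": 0.2,
    "scalability": 0.2,
    "capability": 0.2,
    "severity": 0.2,
    "diversity": 0.2,
}

def effectiveness_score(s: DimensionScores) -> float:
    """Aggregate the five dimension scores into one effectiveness value."""
    return sum(WEIGHTS[name] * getattr(s, name) for name in WEIGHTS)

# Example: score a hypothetical fuzzing method.
print(effectiveness_score(DimensionScores(0.8, 0.6, 0.7, 0.5, 0.9)))
```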
