Abstract

A fuzzer combination is a set of base fuzzers that collaborate on the same target. Fuzzer combinations have been shown to fuzz complicated real-world programs more robustly and efficiently than individual fuzzers, so an effective, quantitative way to evaluate and select combinations would greatly improve bug finding under limited computational resources. However, evaluating fuzzer combinations remains challenging: there is no infrastructure for collaborative fuzzing, no sufficiently efficient collaboration among base fuzzers, and no unified benchmarks, comprehensive metrics, or unified methods for analyzing coverage and bugs. This prevents us from selecting efficient fuzzer combinations and thus impairs vulnerability mining on real-world targets. In this paper, we design and implement FCEVAL, the first open-source platform for evaluating fuzzer combinations. Specifically, we propose a new test case-sharing policy that increases fuzzing potential, providing a more efficient running environment for fuzzer combinations and thus improving evaluation effectiveness. We also select a unified set of diverse benchmarks and comprehensive metrics, and adopt unified, independent methods for real-time coverage statistics and multiple-sanitizer-based bug analysis to keep the evaluation fair and quantitative. In addition, we design tools and guidelines covering the whole evaluation process to make the platform easy to use. With these methodologies, we first build an infrastructure dedicated to collaborative fuzzing as the base of FCEVAL. After comparing two test case-sharing policies on this infrastructure and adopting the more promising one as a core component of FCEVAL, we use FCEVAL to evaluate fuzzer combinations for more than 40,000 CPU hours and draw five key conclusions: (a) an efficient test case-sharing policy improves fuzzing potential and thus evaluation effectiveness; (b) comprehensive metrics are essential; (c) a 24-hour duration and 20 repetitions are substantial for evaluation; (d) independent analysis methods for code coverage and bugs deserve wide adoption; and (e) FCEVAL evaluates fuzzer combinations effectively, fairly, comprehensively, and easily. We also suggest how to improve collaborative fuzzing. The source code and test data are publicly available.
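The test case-sharing policy is the core mechanism the abstract refers to: base fuzzers running on the same target periodically exchange the interesting inputs each has discovered, so that progress made by one fuzzer can seed the others. As a minimal illustrative sketch of the general idea (not the paper's actual policy), assuming a hypothetical AFL-style layout in which each base fuzzer writes its findings to its own queue/ directory under a shared root, one synchronization pass might look like:

```python
import hashlib
import shutil
from pathlib import Path

def sync_corpora(root: Path, fuzzers: list[str]) -> int:
    """One sharing pass: import test cases a fuzzer has not yet seen.

    Hypothetical sketch: assumes each base fuzzer keeps its findings
    under <root>/<name>/queue/, as AFL-style fuzzers commonly do.
    Returns the number of test cases copied between fuzzers.
    """
    # Collect every test case across all fuzzers, deduplicated by content hash.
    shared: dict[str, Path] = {}
    for name in fuzzers:
        for case in (root / name / "queue").glob("*"):
            if case.is_file():
                digest = hashlib.sha256(case.read_bytes()).hexdigest()
                shared.setdefault(digest, case)

    # Copy each test case into the queues that do not already contain it.
    copied = 0
    for name in fuzzers:
        queue = root / name / "queue"
        seen = {hashlib.sha256(p.read_bytes()).hexdigest()
                for p in queue.glob("*") if p.is_file()}
        for digest, case in shared.items():
            if digest not in seen:
                shutil.copy(case, queue / f"sync_{digest[:12]}")
                copied += 1
    return copied
```

Deduplicating by content hash keeps repeated passes cheap; a real policy would additionally decide how often to synchronize and which test cases are worth importing, and choices like these are where sharing policies can differ.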
