Abstract

KLEE is a popular dynamic symbolic execution engine, initially designed at Stanford University and now primarily developed and maintained by the Software Reliability Group at Imperial College London. KLEE has a large community spanning both academia and industry, with over 60 contributors on GitHub, over 350 subscribers on its mailing list, and over 80 participants at a recent dedicated workshop. KLEE has been used and extended by groups from many universities and companies in a variety of areas such as high-coverage test generation, automated debugging, exploit generation, wireless sensor networks, and online gaming, among many others.

Highlights

  • KLEE is a popular testing and analysis platform, initially developed at Stanford University by Daniel Dunbar, Dawson Engler, and the first author of this paper [5], drawing inspiration from the design of EXE [7], another symbolic execution system developed at Stanford

  • KLEE is based on dynamic symbolic execution [8], a variant of symbolic execution [2,9,15] which was initially introduced in 2005 by DART [13] and EGT [6]

  • We refer the reader to the original KLEE paper [5] for a detailed description of KLEE and dynamic symbolic execution (DSE); a minimal usage sketch follows this list

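To make the DSE workflow mentioned above concrete, the sketch below shows how a small C program is typically prepared for KLEE: an input is marked symbolic so that KLEE explores every feasible path through the code and emits a concrete test case for each. The klee_make_symbolic call and the klee/klee.h header are part of KLEE's public API; the toy get_sign function and the build commands in the comments are illustrative assumptions, not taken from the paper.

/* A minimal sketch (not from the paper) of driving KLEE on a toy function.
 * Assumed workflow: compile to LLVM bitcode with clang (pointing -I at the
 * KLEE include directory), then run klee on the resulting bitcode:
 *   clang -I <klee-src>/include -emit-llvm -c -g -O0 get_sign.c -o get_sign.bc
 *   klee get_sign.bc
 */
#include "klee/klee.h"

int get_sign(int x) {
    if (x == 0)
        return 0;
    if (x < 0)
        return -1;
    return 1;
}

int main(void) {
    int a;
    /* Mark 'a' as symbolic: KLEE will fork execution at each branch that
     * depends on 'a' and generate one concrete test case per explored path. */
    klee_make_symbolic(&a, sizeof(a), "a");
    return get_sign(a);
}

Running klee on the bitcode produces one test case per explored path (here, the zero, negative, and positive cases), which can later be replayed against the native program.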

Summary

Short history and impact

KLEE is a popular testing and analysis platform, initially developed at Stanford University by Daniel Dunbar, Dawson Engler, and the first author of this paper [5], drawing inspiration from the design of EXE [7], another symbolic execution system developed at Stanford. The First International KLEE Workshop on Symbolic Execution took place in 2018 at Imperial College London. It attracted over 80 participants from academia, industry, and government, with registration having to close early. Talks covered a wide range of topics related to KLEE and symbolic execution, such as scalability, usability, memory models, constraint solving, and new application domains. The schedule included both academic and industry speakers, with academic keynotes from Sarfraz Khurshid (UT Austin), Alessandro Orso (Georgia Tech), and Abhik Roychoudhury (NUS), and industry keynotes from Indradeep Ghosh (Fujitsu) and Peng Li (Baidu).

Software project and contributors
Modifications for Test-Comp 2019
Set-up and configuration for Test-Comp 2019
Test-Comp 2019 results
KLEE and the Test-Comp 2019 benchmarks