Abstract

Writing static analyzers is hard due to the many equivalent transformations between program source, intermediate representation, and large formulas in Satisfiability Modulo Theories (SMT) format. Traditional methods such as debugger usage, instrumentation, and logging force developers to concentrate on specific low-level issues. At the same time, each analyzer architecture imposes its own view of how the intermediate results required for debugging should be represented. Thus, error debugging remains a concern for every static analysis researcher. This paper presents our experience debugging a work-in-progress industrial static analyzer. We describe the most effective techniques from three groups: constructive (code generation), testing (random test case generation), and logging (log fusion and visual representation). Code generation helps avoid issues with copied code; we enhance it with verification of code usage. Goal-driven random test case generation reduces the risk of developing a tool biased towards specific syntax constructions by producing verifiable test programs with assertions. Log fusion merges module logs and sets up cross-references between them. The visual representation module shows the combined log, presents the major data structures, and provides health and performance reports in the form of log fingerprints. These methods are implemented on the basis of Equid, a static analysis framework for industrial applications, and are used internally for development purposes. They are presented, studied, and evaluated in the paper. The main contributions include a study of failure reasons in the author's project, a set of methods, their implementations, testing results, and two case studies demonstrating the usefulness of the methods.
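
To make the testing technique concrete: a goal-driven random test case generator can emit small programs whose expected verdict is known by construction, so the analyzer's answer can be checked automatically. The sketch below is a minimal illustration of that idea, not Equid's actual generator; the function gen_program, its parameters, and the emitted C-like syntax are hypothetical.

    import random

    # Hypothetical sketch, not Equid's generator: emit a tiny straight-line
    # C-like program whose final assertion is true by construction, because
    # the generator interprets every statement while emitting it. Any analyzer
    # warning on that assertion is therefore a false positive by definition.
    OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b}

    def gen_program(num_vars=3, num_stmts=5, seed=None):
        rng = random.Random(seed)
        names = [f"v{i}" for i in range(num_vars)]
        env = {n: rng.randint(-10, 10) for n in names}        # concrete model of the program
        lines = [f"int {n} = {v};" for n, v in env.items()]   # declarations with known values
        for _ in range(num_stmts):
            dst, a, b = (rng.choice(names) for _ in range(3))
            sym = rng.choice(list(OPS))
            env[dst] = OPS[sym](env[a], env[b])               # keep the model in sync
            lines.append(f"{dst} = {a} {sym} {b};")
        goal = rng.choice(names)
        lines.append(f"assert({goal} == {env[goal]});  /* holds by construction */")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(gen_program(seed=42))

Restricting the sketch to addition and subtraction keeps the concrete values small, so the emitted assertion cannot be invalidated by signed overflow in the generated program.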

Highlights

  • Software engineering suffers from quality assurance issues

  • The paper suggests an approach for increasing the visibility of issues and/or reducing the likelihood of bugs, based on code generation, log file improvements, goal-driven random test case generation, and visual representation

  • These four methods make up the author's static analysis debugging and quality assurance approach


Summary

Introduction

Many approaches aim at proving the absence of runtime errors (these are reviewed in Section 2), but logical correctness is not trivially reachable even when such absence is guaranteed statically. This is often the case because correctness is not automatically derived from internal consistency. The paper suggests an approach for increasing the visibility of issues and/or reducing the likelihood of bugs, based on code generation, log file improvements, goal-driven random test case generation, and visual representation. These four methods make up the author's static analysis debugging and quality assurance approach.
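
Among the four methods, the log file improvements are the most mechanical to illustrate. The sketch below shows one way a log fusion step might merge per-module logs into a single timestamp-ordered stream and cross-reference entries that mention the same entity id; the assumed layout (a numeric timestamp followed by the message, ids such as F123, and the module name taken from the file name) is an assumption for illustration and does not describe Equid's real log format.

    import re
    import sys
    from pathlib import Path

    # Hypothetical sketch, not Equid's implementation: merge per-module logs into
    # one timestamp-ordered stream and add cross-references wherever an entry
    # mentions an entity id (e.g. "F123") that an earlier entry already introduced.
    ID_PATTERN = re.compile(r"\b[A-Z]\d+\b")

    def fuse_logs(paths):
        entries = []
        for path in paths:
            module = Path(path).stem                      # module name taken from the file name
            for lineno, line in enumerate(Path(path).read_text().splitlines(), 1):
                if not line.strip():
                    continue
                ts, _, msg = line.partition(" ")          # assumed layout: "<timestamp> <message>"
                entries.append((float(ts), module, lineno, msg))
        entries.sort()                                    # interleave all modules by timestamp

        first_seen = {}                                   # id -> (module, line) of first mention
        for ts, module, lineno, msg in entries:
            refs = [f"{ident} first seen at {first_seen[ident][0]}:{first_seen[ident][1]}"
                    for ident in ID_PATTERN.findall(msg)
                    if ident in first_seen and first_seen[ident] != (module, lineno)]
            for ident in ID_PATTERN.findall(msg):
                first_seen.setdefault(ident, (module, lineno))
            xref = f"  [xref: {'; '.join(refs)}]" if refs else ""
            print(f"{ts:10.3f} {module:>10} {msg}{xref}")

    if __name__ == "__main__":
        fuse_logs(sys.argv[1:])

For example, fuse_logs(["frontend.log", "solver.log"]) would interleave the two logs and annotate the first solver entry that mentions an id already introduced by the frontend.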
