Abstract

Crash reproduction approaches help developers during debugging by generating a test case that reproduces a given crash. Several solutions have been proposed to automate this task. However, the proposed solutions have been evaluated on a limited number of projects, making comparison difficult. In this paper, we enhance this line of research by proposing JCrashPack, an extensible benchmark for Java crash reproduction, together with ExRunner, a tool to run evaluations simply and systematically. JCrashPack contains 200 stack traces from various Java projects, including industrial open source ones, on which we run an extensive evaluation of EvoCrash, the state-of-the-art tool for search-based crash reproduction. EvoCrash successfully reproduced 43% of the crashes. Furthermore, we observed that reproducing NullPointerException, IllegalArgumentException, and IllegalStateException is relatively easier than reproducing ClassCastException, ArrayIndexOutOfBoundsException, and StringIndexOutOfBoundsException. Our results include a detailed manual analysis of EvoCrash outputs, from which we derive 14 current challenges for crash reproduction, among which the generation of input data and the handling of abstract and anonymous classes are the most frequent. Finally, based on those challenges, we discuss future research directions for search-based crash reproduction for Java.

Highlights

  • Software crashes commonly occur in operating environments and are reported to developers for inspection

  • Crash reproduction approaches can be divided into three categories, based on the kind of data used for crash reproduction: record-replay approaches record data from the running program; post-failure approaches collect data from the crash, like a memory dump; and stack-trace-based post-failure approaches use only the stack trace produced by the crash

  • This paper sets out to create a benchmark of Java crashes that can be reused for experimental purposes


Introduction

Software crashes commonly occur in operating environments and are reported to developers for inspection. To help developers in this process, various automated techniques have been suggested. These techniques typically either use program runtime data (Artzi et al. 2008; Clause and Orso 2007; Narayanasamy et al. 2005; Steven et al. 2000; Gomez et al. 2016; Bell et al. 2013; Cao et al. 2014; Roßler et al. 2013) or crash stack traces (Bianchi et al. 2017; Soltani et al. 2017; Nayrolles et al. 2017; Xuan et al. 2015; Chen and Kim 2015) to generate a test case that triggers the reported crash. Tools like ReCrash (Artzi et al. 2008), ADDA (Clause and Orso 2007), Bugnet (Narayanasamy et al. 2005), jRapture (Steven et al. 2000), MoTiF (Gomez et al. 2016), Chronicler (Bell et al. 2013), and SymCrash (Cao et al. 2014) fall into the first category.
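To make the goal of these tools concrete, the following is a minimal, hypothetical sketch (the class and method names are invented for illustration, not taken from the paper or from EvoCrash): given a reported NullPointerException stack trace, a crash-reproducing test constructs the program state that drives execution to the reported frame and checks that the thrown exception matches the trace.

```java
// Hypothetical crash site. Suppose a user-reported trace looks like:
//   java.lang.NullPointerException
//       at Account.withdraw(Account.java:...)
class Account {
    private String owner; // remains null unless setOwner is called

    void setOwner(String owner) { this.owner = owner; }

    int withdraw(int amount) {
        // Dereferences 'owner' without a null check: the crash site.
        if (owner.isEmpty()) {
            throw new IllegalStateException("no owner");
        }
        return amount;
    }
}

public class CrashReproductionExample {
    public static void main(String[] args) {
        // A generated reproduction test puts the object into the crashing
        // state (owner left null) and calls the method from the trace.
        Account account = new Account();
        try {
            account.withdraw(10);
            System.out.println("not reproduced");
        } catch (NullPointerException e) {
            // Accept the test if the exception's top frame matches the
            // reported one (same class and method name).
            StackTraceElement top = e.getStackTrace()[0];
            System.out.println("reproduced at "
                    + top.getClassName() + "." + top.getMethodName());
        }
    }
}
```

Search-based tools such as EvoCrash automate the hard part of this sketch: finding a sequence of constructor and method calls whose execution raises the same exception with a matching stack trace.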


