Abstract

To date, users of test-driven program repair tools suffer from the overfitting problem: a generated patch may pass all available tests without being correct. In existing work, users are treated merely as passive consumers of the tests. However, what if they are willing to modify a test to better assess the patches obtained from a repair tool? In this work, we propose a novel semi-automatic patch-classification methodology named Poracle. Our key contributions are three-fold. First, we design a novel lightweight specification method that reuses the existing test. Specifically, the user extends the existing failing test with a preservation condition, that is, the condition under which the patched and pre-patched versions should produce the same output. Second, we develop a fuzzer that performs differential fuzzing with a test containing a preservation condition. Once we find an input that satisfies the specified preservation condition but produces different outputs between the patched and pre-patched versions, we classify the patch as incorrect with high confidence. We show that our approach is more effective than four state-of-the-art patch-classification approaches. Last, we show through a user study that users find our semi-automatic patch-assessment method more effective than, and preferable to, manual assessment.
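The following is a minimal, hypothetical sketch of the idea, not Poracle's actual API or benchmarks: all class and method names (absOriginal, absPatched, preservationCondition) are illustrative, and plain random input generation stands in for the differential fuzzer. The buggy version forgets to negate negative inputs, an overfitting patch passes the failing test abs(-5) == 5 but breaks positive inputs, and the user-supplied preservation condition states that the two versions must agree on non-negative inputs.

```java
import java.util.Random;

public class PreservationConditionSketch {

    // Pre-patched (buggy) version: forgets to negate negative inputs.
    static int absOriginal(int x) { return x; }

    // Overfitting candidate patch: passes the failing test abs(-5) == 5,
    // but changes behavior on positive inputs.
    static int absPatched(int x) { return -x; }

    // User-supplied preservation condition: the original was already correct
    // for non-negative inputs, so the patched and pre-patched versions
    // should produce the same output there.
    static boolean preservationCondition(int x) { return x >= 0; }

    public static void main(String[] args) {
        Random rnd = new Random(0);
        for (int trial = 0; trial < 100_000; trial++) {
            int x = rnd.nextInt();                   // fuzzer-style random input
            if (!preservationCondition(x)) continue; // only check preserved inputs
            if (absOriginal(x) != absPatched(x)) {   // differential comparison
                System.out.printf("Divergence on preserved input %d: original=%d, patched=%d%n",
                        x, absOriginal(x), absPatched(x));
                System.out.println("Patch classified as incorrect.");
                return;
            }
        }
        System.out.println("No divergence found; patch is not rejected.");
    }
}
```

Any positive random input exposes the divergence, so this overfitting patch would be rejected even though it passes the original failing test.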
