Abstract

Many approaches for testing automated and autonomous driving systems in dynamic traffic scenarios rely on the reuse of test cases, e.g., recording test scenarios during real test drives or creating "test catalogs." Both are widely used in industry and in the literature. By counterexample, we show that the quality of test cases is system-dependent and that faulty system behavior may stay unrevealed during testing if test cases are naïvely reused. We argue that, in general, system-specific "good" test cases need to be generated. Thus, recorded scenarios cannot simply be reused for testing, and regression testing strategies need to be rethought for automated and autonomous driving systems. The counterexample involves a system built according to state-of-the-art literature, which is tested in a traffic scenario using a high-fidelity physical simulation tool. Test scenarios are generated using standard techniques and state-of-the-art methodologies from the literature. By comparing the quality of the resulting test cases, we argue against a naïve reuse of test cases.
