Abstract

One challenge for Automated Driving (AD) that remains unresolved to this day is its assessment for market release. Applying previous strategies derived from the V-model is infeasible due to the vast amount of real-road testing required to prove safety with acceptable significance. A full set of requirements covering all possible traffic scenarios for testing an AD system still cannot be derived. Several approaches address this issue, either by improving the set of test cases or by including additional virtual test domains in the assessment process. However, all rely on simulations that cannot be validated as a whole and therefore cannot be used to prove safety. This work addresses this issue and presents a method to verify the use of simulation in a scenario-based assessment process. By introducing a pipeline for reprocessing real-world scenarios as test cases, we demonstrate where errors emerge and how they can be isolated. We reveal an issue in simulation that may cause behavior changes of the AD function during resimulation and thus precludes the straightforward use of simulation in the assessment process. A solution that promises to minimize reprocessing errors and avoid this behavior change is presented. Finally, this enables the local variation of real-world driving tests in a purely simulative context, yielding verified and usable results.