Abstract

As the perception performance of sensors for Advanced Driver Assistance Systems (ADAS) increases, scenario-based simulation is used more and more frequently to manage the complexity of real-world testing in terms of cost and time. The perception system provides the input for the vehicle guidance algorithms, but simulating ADAS sensors remains a challenging task in virtual testing. The literature reports a multitude of relevant modelling approaches, with data-driven models becoming increasingly important. A basic method is to fit the sensor output in the virtual environment to high-fidelity measurements of real-world scenarios, so that a direct relation can be established between real and synthetic sensor data. To prove the suitability of a method, it is necessary to quantify the gap between simulation and reality in order to determine the performance of different models. In this work, the authors address this problem and visualize the gap by introducing a multi-level evaluation approach that combines Model Generalization Ability Evaluation and Case Implicit Performance Evaluation. The former directly evaluates the model's overall performance, while the latter is used for specific cases in simulation. The study shows that this combined evaluation approach provides an in-depth framework for evaluating sensor models and making their differences apparent.
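The abstract does not specify which metrics are used to quantify the gap between real and synthetic sensor data. As an illustration only, a minimal sketch of such a comparison might compute standard error metrics (RMSE, MAE, bias) over paired sensor readings; the function name, the metric choice, and the radar-range example below are assumptions, not the paper's actual method.

```python
import numpy as np

def sensor_gap_metrics(real, synthetic):
    """Illustrative per-sample gap metrics between real and synthetic
    sensor outputs (hypothetical; not the metrics used in the paper)."""
    real = np.asarray(real, dtype=float)
    synthetic = np.asarray(synthetic, dtype=float)
    err = synthetic - real
    return {
        "rmse": float(np.sqrt(np.mean(err ** 2))),  # root-mean-square error
        "mae": float(np.mean(np.abs(err))),         # mean absolute error
        "bias": float(np.mean(err)),                # systematic offset
    }

# Hypothetical example: simulated radar range readings vs. reference measurements
real = [10.0, 12.5, 15.2, 20.1]
synthetic = [10.3, 12.1, 15.8, 19.7]
metrics = sensor_gap_metrics(real, synthetic)
```

A smaller gap on such metrics would indicate that the sensor model reproduces the real measurements more faithfully for the scenario under test.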
