Abstract
The market introduction of automated driving functions poses a significant challenge for safety validation, primarily due to the complexity of real traffic test cases. While virtual testing has emerged as a viable solution, accurately replicating physical perception sensors in virtual environments remains a formidable task. This study presents an evaluation method for virtual perception sensors, focusing on their performance in automated driving scenarios with real driving behaviour. Real driving data from a proving ground is collected and integrated into a multi-body simulation software, employing a specialized toolbox for seamless conversion of recorded measurements into simulation-ready data. Virtual sensor models for lidar, radar, and camera, utilizing various machine learning approaches, are implemented within the simulation alongside commercial sensor models. The output of these models is compared to real measurement data using statistical metrics, including the Chebyshev distance, the Pearson correlation coefficient, and the cross-correlation coefficient. The evaluation highlights the accuracy and performance of the machine learning-based models and the importance of employing multiple metrics that consider both correlation and offset between simulated and measured data.
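The abstract stresses that correlation-based metrics alone miss constant offsets between simulated and measured signals, which is why several metrics are combined. The sketch below illustrates the three named metrics on hypothetical sensor traces; the signals and values are illustrative only and are not taken from the study.

```python
import numpy as np

# Hypothetical measured vs. simulated sensor traces (illustrative data only).
measured = np.array([0.0, 1.2, 2.9, 4.1, 5.0])
simulated = np.array([0.1, 1.0, 3.0, 4.3, 4.8])

# Chebyshev distance: the largest absolute deviation between the two signals.
# Sensitive to offsets, unlike pure correlation measures.
chebyshev = np.max(np.abs(measured - simulated))

# Pearson correlation coefficient: linear correlation, insensitive to
# constant offset and scale between the signals.
pearson = np.corrcoef(measured, simulated)[0, 1]

# Normalized cross-correlation: correlate mean-removed, unit-scaled signals
# over all lags and take the peak (one common definition; the study's exact
# formulation may differ).
m = (measured - measured.mean()) / (measured.std() * len(measured))
s = (simulated - simulated.mean()) / simulated.std()
cross_corr = np.correlate(m, s, mode="full").max()

print(f"Chebyshev: {chebyshev:.3f}, Pearson: {pearson:.3f}, "
      f"Cross-corr peak: {cross_corr:.3f}")
```

A pure offset (e.g. `simulated = measured + 0.5`) would leave the Pearson coefficient at 1.0 while the Chebyshev distance grows to 0.5, which is the motivation for using both kinds of metric together.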