Abstract
The virtual testing and validation of advanced driver assistance system and automated driving (ADAS/AD) functions require efficient and realistic perception sensor models. In particular, the limitations and measurement errors of real perception sensors need to be simulated realistically in order to generate useful sensor data for the ADAS/AD function under test. In this paper, a novel sensor modeling approach for automotive perception sensors is introduced. The approach combines kernel density estimation with regression modeling and focuses primarily on position measurement errors. It is designed for any automotive perception sensor that provides position estimates at the object level. To demonstrate and evaluate the new approach, a common state-of-the-art automotive camera (Mobileye 630) was considered. Both sensor measurements (Mobileye position estimations) and ground-truth data (DGPS positions of all participating vehicles) were collected during a large measurement campaign on a Hungarian highway to support the development and experimental validation of the new approach. The quality of the model was tested against reference measurements in terms of the pointwise position error in the lateral and longitudinal directions. In addition, the natural scattering of the sensor model output was modeled satisfactorily; in particular, the deviations of the position measurements were well reproduced by this approach.
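The abstract describes the approach only at a conceptual level, so the following Python sketch is merely one plausible illustration of combining regression modeling with kernel density estimation for position errors; it is not the authors' implementation, and the variable names, polynomial degree, and synthetic data are all placeholder assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical training data (placeholders, not the paper's measurements):
# ground-truth longitudinal distances from DGPS and the corresponding
# sensor position errors (camera estimate minus ground truth).
rng = np.random.default_rng(0)
gt_distance = rng.uniform(5.0, 80.0, size=2000)
position_error = 0.02 * gt_distance + rng.normal(0.0, 0.5, size=2000)

# Step 1 -- regression: capture the systematic, distance-dependent bias.
coeffs = np.polyfit(gt_distance, position_error, deg=2)

# Step 2 -- kernel density estimation: capture the natural scatter that
# remains after the systematic bias has been removed.
residuals = position_error - np.polyval(coeffs, gt_distance)
residual_kde = gaussian_kde(residuals)

def simulate_longitudinal_measurement(true_distance: float) -> float:
    """Truth + regressed bias + one noise sample drawn from the residual KDE."""
    bias = np.polyval(coeffs, true_distance)
    noise = residual_kde.resample(1)[0, 0]
    return true_distance + bias + noise
```

Splitting the error into a regressed bias and a KDE-sampled residual mirrors the stated goal of reproducing both the systematic measurement errors and the natural scattering of the sensor output.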
Highlights
According to the World Health Organization, more than 1.35 million people die in road traffic crashes each year, and up to 50 million are injured or become disabled.
A significant element of ADAS/AD function development is the collection of measurement data, which are typically used both to train and to validate the AI-based perception algorithms and the control algorithms that build on them.
The test data for this evaluation come from a detected object in the measurement campaign of Section 3 that was excluded from the training data.
Summary
According to the World Health Organization, more than 1.35 million people die in road traffic crashes each year, and up to 50 million are injured or become disabled. Systems capable of SAE Level-3 “conditional driving automation” take over object and event detection and response. This implies that the driver can take his/her eyes off the road and is only required to intervene when the system requests this. The effort to approve SAE Level-3+ vehicles, which will use cameras together with other perception sensors to support AD functions, will increase significantly, since the responsibility for environment perception shifts from the driver to the system. Reducing the development effort for ADAS functions and eventually enabling AD functions demand the extension of conventional test methods, e.g., physical test drives, with simulations in virtual test environments [4,12], or mixed methods combining both testing abstraction levels [13,14,15,16]. In such a virtual test environment, a camera is simulated by a sensor model.
[Figure: the sensor model converts ground-truth data into an output object list with modified positions, which is fed to the ADAS/AD function inside the virtual test environment.]
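To make the role of such a sensor model concrete, here is a minimal, hypothetical sketch of the object-level interface suggested by the figure: the model receives ground-truth objects from the simulator and returns an object list with error-afflicted positions for the ADAS/AD function under test. The Gaussian error functions below are stand-ins; the paper's approach would instead draw the errors from the fitted regression-plus-KDE model sketched above. All names and parameter values are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List
import random

@dataclass
class TrackedObject:
    obj_id: int
    x: float  # longitudinal position (m)
    y: float  # lateral position (m)

def make_sensor_model(long_error: Callable[[float], float],
                      lat_error: Callable[[float], float]):
    """Wrap per-axis error models into an object-list sensor model."""
    def sensor_model(ground_truth: List[TrackedObject]) -> List[TrackedObject]:
        # Replace each ideal position with an error-afflicted one,
        # as a real camera would report it.
        return [TrackedObject(o.obj_id, o.x + long_error(o.x), o.y + lat_error(o.y))
                for o in ground_truth]
    return sensor_model

# Placeholder error models: distance-dependent longitudinal bias plus noise,
# and zero-mean lateral noise.
camera_model = make_sensor_model(
    long_error=lambda x: random.gauss(0.02 * x, 0.5),
    lat_error=lambda y: random.gauss(0.0, 0.2),
)

objects = [TrackedObject(1, 40.0, -1.5), TrackedObject(2, 12.0, 0.3)]
measured = camera_model(objects)  # object list passed on to the ADAS/AD function
```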