Abstract

The virtual testing and validation of advanced driver assistance systems and automated driving (ADAS/AD) functions require efficient and realistic perception sensor models. In particular, the limitations and measurement errors of real perception sensors need to be simulated realistically in order to generate useful sensor data for the ADAS/AD function under test. In this paper, a novel sensor modeling approach for automotive perception sensors is introduced. The novel approach combines kernel density estimation with regression modeling and puts the main focus on the position measurement errors. The modeling approach is designed for any automotive perception sensor that provides position estimations at the object level. To demonstrate and evaluate the new approach, a common state-of-the-art automotive camera (Mobileye 630) was considered. Both sensor measurements (Mobileye position estimations) and ground-truth data (DGPS positions of all participating vehicles) were collected during a large measurement campaign on a Hungarian highway to support the development and experimental validation of the new approach. The quality of the model was tested and compared to reference measurements, leading to a pointwise position error of in the lateral and in the longitudinal direction. Additionally, the modeling of the natural scattering of the sensor model output was satisfactory. In particular, the deviations of the position measurements were well modeled with this approach.
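The core idea of the approach can be sketched in a few lines: fit a kernel density estimate to the observed position errors (sensor measurement minus ground truth) and then sample from that density to perturb ground-truth positions in the virtual test environment. The sketch below uses SciPy's `gaussian_kde`; the error magnitudes and variable names are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic stand-in for observed position errors (sensor - ground truth) [m];
# in the paper these would come from Mobileye vs. DGPS measurements.
long_err = rng.normal(0.0, 1.5, 500)  # longitudinal errors (assumed spread)
lat_err = rng.normal(0.0, 0.3, 500)   # lateral errors (assumed spread)

# Fit a 2-D kernel density estimate of the joint error distribution.
kde = gaussian_kde(np.vstack([long_err, lat_err]))

def perturb_positions(gt_xy: np.ndarray) -> np.ndarray:
    """Add KDE-sampled measurement errors to ground-truth (x, y) positions."""
    errors = kde.resample(len(gt_xy)).T  # shape (n, 2)
    return gt_xy + errors

# Hypothetical ground-truth object positions (x longitudinal, y lateral) [m].
gt = np.array([[50.0, 0.0], [80.0, -1.2]])
simulated = perturb_positions(gt)
```

A regression model, as in the paper, would additionally condition the error distribution on the object state (e.g., distance); the sketch omits that step and samples from a single unconditional density.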

Highlights

  • According to the World Health Organization, more than 1.35 million people die in road traffic crashes each year, and up to 50 million are injured or become disabled

  • A significant element of ADAS/AD function development is the collection of measurement data, which are typically utilized in both training and validating the AI-based perception algorithms and the control algorithms utilizing them

  • The test data for this evaluation were from a detected object from the measurement campaign in Section 3 that was excluded from the training data


Summary

Introduction

According to the World Health Organization, more than 1.35 million people die in road traffic crashes each year, and up to 50 million are injured or become disabled. Systems capable of SAE Level-3 “conditional driving automation” take over object and event detection and responses. This implies that the driver can take his/her eyes off the road and is only required to intervene when the system requests this. The effort to approve SAE Level-3+ vehicles, which will use cameras together with other perception sensors to support AD functions, will increase significantly, since the responsibility for environment perception is shifted from the driver to the system. Reducing the development effort for ADAS functions and eventually enabling AD functions demand the extension of conventional test methods, e.g., physical test drives, with simulations in virtual test environments [4,12], or mixed methods combining both testing abstraction levels [13,14,15,16]. In such a virtual test environment, a camera is simulated by a sensor model. [Figure: the sensor model outputs an object list with modified positions to the ADAS/AD function in the virtual test environment]

Previous Work on Automotive Camera Modeling
Datasets for Automotive Camera Sensors
Scope of Work
Structure of the Article
Object-List-Based Sensor Model
Kernel Density Estimation: A Short Introduction
Sensor Model Development
Validation Data
Campaign Description
Test Setup and Measurement Hardware
Scenario Descriptions
Sensor Models
Results
Summary and Conclusions