Abstract

This paper presents a new dataset for automated driving that identifies and addresses a gap in existing perception datasets. Most state-of-the-art perception datasets focus on providing onboard sensor measurements together with semantic annotations under a variety of driving conditions, but this information is often insufficient because the derived object lists and position data contain unknown, time-varying errors. The paper and the associated dataset describe the first publicly available perception measurement data that include not only onboard camera, Lidar, and radar measurements with semantically classified objects, but also high-precision ground-truth position measurements enabled by accurate RTK-assisted GPS localization systems mounted on both the ego vehicle and the dynamic target objects. The paper explains how the data were captured, describes the metadata structure and content, and outlines application examples in which the dataset has been, and can be, used for the development, testing, and validation of automated driving and environmental perception systems.
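As a minimal sketch of how such ground truth can support perception validation, the example below compares an onboard object detection, expressed in the ego vehicle frame, against the relative position derived from the RTK-GPS measurements of the ego and target vehicles. The coordinate conventions, function names, and sample values are illustrative assumptions, not the dataset's actual schema.

```python
import math

def gt_relative_position(ego_xy, ego_yaw, target_xy):
    """Ground-truth position of the target in the ego vehicle frame.

    ego_xy, target_xy: (east, north) coordinates in a common metric frame
    (e.g. UTM) from the RTK-GPS receivers; ego_yaw: ego heading in radians,
    measured counter-clockwise from east. (Assumed conventions.)
    """
    dx = target_xy[0] - ego_xy[0]
    dy = target_xy[1] - ego_xy[1]
    # Rotate the global offset into the ego frame (x forward, y left).
    cos_y, sin_y = math.cos(ego_yaw), math.sin(ego_yaw)
    return (cos_y * dx + sin_y * dy, -sin_y * dx + cos_y * dy)

def detection_error(detected_xy, ego_xy, ego_yaw, target_xy):
    """Euclidean error of an onboard detection against the RTK ground truth."""
    gt_x, gt_y = gt_relative_position(ego_xy, ego_yaw, target_xy)
    return math.hypot(detected_xy[0] - gt_x, detected_xy[1] - gt_y)

# Hypothetical sample: ego at (0, 0) heading east, target 20 m ahead and
# 1 m to the left; the onboard sensor reports the object at (19.6, 1.3).
err = detection_error((19.6, 1.3), (0.0, 0.0), 0.0, (20.0, 1.0))
print(f"position error: {err:.2f} m")  # -> position error: 0.50 m
```

Evaluating such errors over time and over many target objects is one way the time-varying inaccuracy of onboard object lists, mentioned above, can be quantified against the RTK ground truth.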
