Abstract

Autonomous Driving (AD) datasets, when used in combination with deep learning techniques, have enabled significant progress on difficult AD tasks such as perception, trajectory prediction, and motion planning. These datasets represent the content of driving scenes as captured by various sensors, including cameras, RADAR, and LiDAR, along with 2D/3D annotations of traffic participants. Such datasets, however, often fail to capture and represent the spatial, temporal, and semantic relations between entities in a scene. This lack of knowledge leads to a shallow understanding of the true complexity and dynamics inherent in a driving scene. In this paper, we argue that a Knowledge Graph (KG)-based representation of driving scenes, which provides richer structure and semantics, will lead to further improvements in AD. Towards this goal, we developed a layered architecture, ontologies for specific AD datasets, and a fundamental ontology of shared concepts. We also built KGs for three different AD datasets. We analyze the information contained in these AD KGs and outline how the additional semantic information could improve the performance of different AD tasks. Moreover, we provide example queries that retrieve relevant information which can be exploited to augment AD pipelines. All artifacts needed for reproducibility are provided via a GitHub repository (https://github.com/boschresearch/dskg-constructor). Note that we removed the internal namespaces of reused ontologies for confidentiality and to provide self-contained ontologies. Because the original datasets are under specific licences, the KGs themselves are not published, but we provide the scripts to generate them.
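
To make the kind of retrieval mentioned above concrete, the following minimal sketch (not taken from the paper) shows how a scene KG could be queried, assuming the KG is serialized as Turtle and queried via SPARQL with rdflib. The namespace, class, and property names (ds:Scene, ds:hasParticipant, ds:Pedestrian) and the file name scene_kg.ttl are hypothetical placeholders, not the actual terms defined by the paper's ontologies.

```python
# Minimal sketch (assumed vocabulary): querying a driving-scene KG with rdflib.
# ds:Scene, ds:hasParticipant, ds:Pedestrian, and "scene_kg.ttl" are hypothetical
# placeholders for the ontology terms and files produced by the actual scripts.
from rdflib import Graph

g = Graph()
g.parse("scene_kg.ttl", format="turtle")  # a KG produced by the generation scripts

query = """
PREFIX ds: <http://example.org/driving-scene#>
SELECT ?scene ?pedestrian
WHERE {
  ?scene a ds:Scene ;
         ds:hasParticipant ?pedestrian .
  ?pedestrian a ds:Pedestrian .
}
"""

# Retrieve all scenes containing at least one pedestrian, e.g. to sample
# pedestrian-rich scenes when training a trajectory-prediction model.
for row in g.query(query):
    print(f"Scene {row.scene} contains pedestrian {row.pedestrian}")
```

A query of this shape illustrates how the semantic relations stored in the KG (here, scene-to-participant links) can be used to select targeted training or evaluation data for downstream AD tasks.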
