Abstract

Recently, beyond object detection and segmentation, high-level understanding of autonomous driving scenarios has been attracting increasing attention, and the traffic scene graph has emerged as a promising way to model and represent the cognitive semantics of driving scenarios. However, constructing traffic scene graphs remains challenging due to the complexity of driving scenes and the lack of benchmark datasets covering diverse traffic entities and their relationships. In this paper, we propose a novel driving scene understanding paradigm that explicitly models traffic entities and the relationships between them from the view of the ego-vehicle. Building on parallel vision and this paradigm, we construct an ego-centric Traffic Scene Graph dataset (TSG-451), which contains 451 images, 2,266 entities, and 4,272 relationships in real and artificial traffic scenarios. Through qualitative and quantitative analysis, we validate the advantages of the proposed paradigm and dataset. Further investigation shows that our work represents traffic scenarios more comprehensively and provides fine-grained semantics that benefit the ego-vehicle's decision-making.
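The ego-centric scene-graph representation described above can be sketched as a set of labeled entities plus (subject, predicate, object) relationship triples. The class, category, and predicate names below are illustrative assumptions, not taken from TSG-451:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Entity:
    id: int
    category: str  # e.g. "car", "pedestrian", "traffic light" (hypothetical labels)

@dataclass
class SceneGraph:
    entities: List[Entity] = field(default_factory=list)
    # relationships stored as (subject_id, predicate, object_id) triples
    relations: List[Tuple[int, str, int]] = field(default_factory=list)

    def ego_relations(self, ego_id: int = 0) -> List[str]:
        """Predicates whose subject is the ego-vehicle."""
        return [f"{p} entity {o}" for s, p, o in self.relations if s == ego_id]

# toy example: ego-vehicle behind a car that is stopped at a traffic light
g = SceneGraph()
g.entities += [Entity(0, "ego-vehicle"), Entity(1, "car"), Entity(2, "traffic light")]
g.relations += [(0, "behind", 1), (1, "stopped at", 2)]
print(g.ego_relations())  # relations seen from the ego-vehicle's view
```

Representing relationships as triples keeps the graph easy to query from the ego-vehicle's perspective, which is the view the proposed paradigm adopts.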
