Abstract

High-definition (HD) maps provide a complementary source of information for Advanced Driver Assistance Systems (ADAS), allowing them to better understand the vehicle’s surroundings and make more informed decisions. HD maps are also widely employed in virtual testing to evaluate the behavior of ADAS components under simulated conditions. With the advent of sensor-equipped autonomous vehicles, raw machine-oriented data will become increasingly available. The proposed pipeline aims to provide a high-level semantic interpretation of raw vehicle sensory data to derive, in an automated fashion, lane-oriented HD maps of the environment. We first present RoadStarNet, a deep learning architecture designed to extract and classify road line markings from imagery data. We then show how to obtain a semantic Bird’s-Eye View (BEV) mapping of the extracted road line markings by exploiting frame-by-frame localization information. Next, we describe how to progress to a graph-based representation that makes it practical to model complex road line marking structures, and we show how this representation can be leveraged to produce an HD map in the Lanelet2 format. Lastly, we experimentally evaluate the proposed approach in real-world scenarios in terms of accuracy and coverage.
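
To make the final step of the pipeline concrete, the sketch below shows how two BEV lane-boundary polylines could be assembled into a lanelet and serialized as a Lanelet2 (.osm) map. This is a minimal illustration, not the authors' implementation: it assumes the official lanelet2 Python bindings, and the input polylines, marking classes, and map origin are hypothetical stand-ins for the output of the upstream detection and BEV-mapping stages.

import lanelet2
from lanelet2.core import (AttributeMap, Lanelet, LaneletMap,
                           LineString3d, Point3d, getId)
from lanelet2.io import Origin
from lanelet2.projection import UtmProjector

def polyline_to_linestring(points_xy, marking_type, marking_subtype):
    """Convert a BEV polyline (list of (x, y) in meters) into a Lanelet2
    line string tagged with the detected marking class."""
    pts = [Point3d(getId(), x, y, 0.0) for x, y in points_xy]
    attrs = AttributeMap({"type": marking_type, "subtype": marking_subtype})
    return LineString3d(getId(), pts, attrs)

# Hypothetical output of the detection/BEV stages: a dashed left boundary
# and a solid right boundary for a single lane.
left = polyline_to_linestring([(0, 3.5), (10, 3.5), (20, 3.6)],
                              "line_thin", "dashed")
right = polyline_to_linestring([(0, 0.0), (10, 0.0), (20, 0.1)],
                               "line_thin", "solid")

# A lanelet is defined by its left and right bounds.
lanelet = Lanelet(getId(), left, right)
lanelet.attributes["subtype"] = "road"

lanelet_map = LaneletMap()
lanelet_map.add(lanelet)

# A projector anchored at a (hypothetical) geographic origin converts the
# local metric coordinates to lat/lon for the .osm serialization.
projector = UtmProjector(Origin(45.06, 7.66))
lanelet2.io.write("hd_map.osm", lanelet_map, projector)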
