Abstract
Road markings are reflective features on roads that provide important information for safe and smooth driving. With the rise of autonomous vehicles (AVs), it is necessary to represent them digitally, such as in high-definition (HD) maps generated by mobile mapping systems (MMSs). Unfortunately, MMSs are expensive, motivating the use of alternatives such as low-cost light detection and ranging (LiDAR) sensors. However, low-cost LiDAR sensors produce sparser point clouds than their survey-grade counterparts. This significantly reduces the capability of existing deep learning techniques to automatically extract road markings, such as using convolutional neural networks (CNNs) to classify point cloud-derived imagery. One solution is to provide a more suitable loss function to guide the CNN model during training and thereby improve its predictions. In this work, we propose a modified loss function—focal combo loss—that enhances the capability of a CNN to extract road markings from sparse point cloud-derived images in terms of accuracy, reliability, and versatility. Our results show that focal combo loss outperforms existing loss functions and CNN methods in road marking extraction in all three aspects, achieving the highest mean F1-score and the lowest uncertainty for the two distinct CNN models tested.
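The abstract does not give the exact formulation of focal combo loss, but a plausible sketch can be built from its likely ingredients: combo loss is commonly a weighted sum of a cross-entropy term and a Dice term, and "focal" typically refers to modulating each term so that easy examples are down-weighted. The sketch below is one hypothetical NumPy formulation under those assumptions; the names `alpha` and `gamma`, and the specific way the focal modulation is applied, are illustrative and not taken from the paper.

```python
import numpy as np

def focal_combo_loss(y_true, y_pred, alpha=0.5, gamma=2.0, eps=1e-7):
    """Hypothetical sketch of a focal combo loss for binary segmentation.

    Combines a focal cross-entropy term with a focally sharpened Dice
    term. `alpha` balances the two terms; `gamma` controls how strongly
    easy examples are down-weighted. This is an illustrative assumption,
    not the authors' exact formulation.
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)

    # Focal term: cross-entropy modulated by (1 - p_t)^gamma, where p_t
    # is the predicted probability of the true class.
    p_t = np.where(y_true == 1, y_pred, 1.0 - y_pred)
    focal = -np.mean((1.0 - p_t) ** gamma * np.log(p_t))

    # Dice term on the foreground (road-marking) class; the 1/gamma
    # exponent is one common way to add focal behavior to Dice loss.
    intersection = np.sum(y_true * y_pred)
    dice = (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)
    focal_dice = (1.0 - dice) ** (1.0 / gamma)

    return alpha * focal + (1.0 - alpha) * focal_dice
```

The Dice term is what makes such a loss attractive for sparse point cloud-derived imagery: road-marking pixels are a small minority of each image, and a pure cross-entropy loss lets the dominant background class swamp the gradient, whereas the Dice term scores overlap on the foreground class directly.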