Abstract

Lane boundary detection is a key technology for self-driving cars. In this paper, we propose a spatiotemporal, deep-learning-based lane boundary detection method that can accurately detect lane boundaries under complex weather conditions and traffic scenarios in real time. Our algorithm consists of three parts: (i) inverse perspective transform and lane boundary position estimation using the spatial and temporal constraints of lane boundaries, (ii) convolutional neural network (CNN) based boundary type classification and position regression, and (iii) optimization and lane fitting. The algorithm is designed to accurately detect lane boundaries and classify line types under a variety of environmental conditions in real time. We tested the proposed algorithm on three open-source datasets and compared the results with other state-of-the-art methods. Experimental results show that our algorithm achieves high accuracy and robustness when detecting lane boundaries in a variety of scenarios in real time. In addition, we deployed the algorithm on embedded platforms and verified its real-time performance on real self-driving cars.
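
To make stage (i) more concrete, the sketch below shows a minimal inverse perspective (bird's-eye view) transform with OpenCV. The image size and the four source/destination points are illustrative assumptions, not the calibration used in the paper.

```python
# Minimal sketch of an inverse perspective (bird's-eye view) transform.
# The frame size and the four point correspondences are assumptions for
# illustration only, not the paper's camera calibration.
import cv2
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder camera frame

# A trapezoid on the road surface (image coordinates) is mapped to a
# rectangle, so parallel lane boundaries stay parallel in the top view.
src = np.float32([[220, 300], [420, 300], [620, 470], [20, 470]])
dst = np.float32([[150, 0], [490, 0], [490, 480], [150, 480]])

ipm = cv2.getPerspectiveTransform(src, dst)
top_view = cv2.warpPerspective(frame, ipm, (640, 480))
print(top_view.shape)  # (480, 640, 3): same resolution, perspective removed
```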

Highlights

  • In recent years, autonomous driving has received widespread attention

  • Since boundary type classification and lane boundary regression are performed simultaneously, only samples with both correct classification and correct regression results are counted as true positives (TP); see the sketch after this list

  • Road Vehicle Dataset (RVD): since RVD contains images collected under different weather conditions and traffic scenarios, we can evaluate our method’s robustness on it
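
The sketch below illustrates the true-positive rule from the second highlight, assuming detections are already matched one-to-one to ground truth and using a hypothetical position-error threshold; the actual matching procedure and threshold are not taken from the paper.

```python
# Sketch of the TP rule: a sample counts as a true positive only if BOTH
# the predicted boundary type and the regressed position are correct.
# The 20-pixel position tolerance is an assumed value for illustration.
def count_true_positives(predictions, ground_truth, pos_threshold=20.0):
    """predictions / ground_truth: lists of (boundary_type, x_position) pairs."""
    tp = 0
    for (pred_type, pred_x), (gt_type, gt_x) in zip(predictions, ground_truth):
        if pred_type == gt_type and abs(pred_x - gt_x) <= pos_threshold:
            tp += 1
    return tp

# Example: the second sample fails the type check, so only 2 of 3 count as TP.
preds = [("solid", 103.0), ("dashed", 250.0), ("solid", 400.0)]
gts   = [("solid", 110.0), ("solid", 252.0), ("solid", 395.0)]
print(count_true_positives(preds, gts))  # -> 2
```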


Summary

Introduction

Autonomous driving has received widespread attention in recent years, and identifying the traffic scene around an autonomous vehicle is a key step in autonomous driving. Lane detection has traditionally been performed using hand-crafted features such as gradient and intensity-change information [9], [13], [17]. These algorithms achieve high accuracy on well-lit highways but perform comparatively poorly when lane boundaries are obscured or in bad weather, because lane boundaries exhibit different features in different traffic scenarios and hand-crafted features cannot cover multiple scenes at the same time. Deep-learning-based methods can learn features that generalize across scenes, but usually at a higher computational cost. To reduce this computational complexity, we combine spatial and temporal constraints on lane boundaries to reduce the scale of the CNN structure, ensuring that our algorithm runs accurately and robustly on a self-driving platform in real time.
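
As a rough illustration of the hand-crafted gradient features mentioned above (not an implementation of the cited methods), a horizontal Sobel gradient with a fixed threshold picks out lane-marking edges in a grayscale frame; the threshold value is an assumption.

```python
# Toy gradient-based lane-edge filter: lane markings are usually brighter
# than the road, so large horizontal intensity changes mark their edges.
# The threshold (40) is an illustrative assumption.
import cv2
import numpy as np

gray = np.zeros((480, 640), dtype=np.uint8)          # placeholder grayscale frame
grad_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
lane_edge_mask = (np.abs(grad_x) > 40).astype(np.uint8)
print(int(lane_edge_mask.sum()))  # number of candidate lane-edge pixels
```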

Related Work
Spatial and Temporal Based Lane Boundary Detection
Spatial-temporal Based Lane Boundary Position Estimation in Top View
CNN for Boundary Type Classification and Lane Boundary Regression
Lane Fitting and Optimization
Experiments
Model Training
Evaluation
Implementation on Embedded System
Conclusion and Future Work
