Abstract

The robustness and stability of lane detection are vital for advanced driver assistance technology and even autonomous driving technology. To meet the challenges of real-time lane detection in complex traffic scenes, a simple but robust multilane detection method is proposed in this paper. The proposed method breaks the lane detection task down into two stages: a lane line detection algorithm based on instance segmentation and a lane modeling algorithm based on adaptive perspective transformation. Firstly, the lane line detection algorithm based on instance segmentation is decomposed into two tasks, and a multitask network based on MobileNet is designed. This network consists of two parts: a lane line semantic segmentation branch and a lane line Id embedding branch. The semantic segmentation branch obtains the segmentation results of lane pixels and reconstructs the lane line binary image. The Id embedding branch determines which pixels belong to the same lane line, so that pixels of different lane lines are assigned to different categories and then clustered. Secondly, an adaptive perspective transformation model is adopted, in which motion information is used to accurately convert the original image into a bird’s-eye view, and a least-squares second-order polynomial is then fitted to the lane line pixels. Finally, experiments on the CULane dataset show that the proposed method achieves performance similar to or better than several state-of-the-art methods; its F1 score on the normal test set and most challenge test sets exceeds that of the other algorithms, which verifies the effectiveness of the proposed method. Field experiments further show that the proposed method has good practical value in various complex traffic scenes.
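
As an illustration of the second stage described above, the following Python sketch warps detected lane pixels into a bird's-eye view and fits a least-squares second-order polynomial to them. It is a minimal sketch under assumptions: it uses a fixed OpenCV homography built from hand-picked source/destination points, whereas the paper derives the transformation adaptively from motion information, and the point coordinates and the helper name fit_lane_birds_eye are illustrative only.

    import cv2
    import numpy as np

    def fit_lane_birds_eye(lane_pixels_xy, src_pts, dst_pts):
        """Warp lane pixel coordinates to a bird's-eye view and fit x = a*y^2 + b*y + c."""
        # Homography mapping the camera view onto the bird's-eye view.
        H = cv2.getPerspectiveTransform(src_pts, dst_pts)
        # cv2.perspectiveTransform expects points shaped (N, 1, 2), float32.
        pts = lane_pixels_xy.reshape(-1, 1, 2).astype(np.float32)
        bev = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
        # Least-squares second-order fit with x as a function of y, since lane
        # lines are close to vertical in the bird's-eye view.
        xs, ys = bev[:, 0], bev[:, 1]
        coeffs = np.polyfit(ys, xs, deg=2)  # [a, b, c]
        return H, coeffs

    # Usage with made-up trapezoid-to-rectangle correspondences and lane pixels:
    src = np.float32([[200, 460], [440, 460], [30, 700], [610, 700]])
    dst = np.float32([[100, 0], [412, 0], [100, 512], [412, 512]])
    lane_px = np.float32([[210, 470], [230, 520], [255, 580], [285, 650]])
    H, coeffs = fit_lane_birds_eye(lane_px, src, dst)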

Highlights

  • In order to increase the running speed, improve the detection accuracy, and meet the requirements of real vehicle applications, the lane detection problem is decomposed into two tasks, and a multitask network based on MobileNet is designed for lane line segmentation. The multitask network includes two branches: a lane line semantic segmentation branch and a lane line Id embedding branch (a sketch of this two-branch design follows this list). The semantic segmentation branch is mainly used to obtain the segmentation results of lane pixels and reconstruct the lane line binary image. The Id embedding branch mainly determines which pixels belong to the same lane line, thereby classifying different lane lines into different categories and clustering these different categories

  • Although multilane detection networks based on deep learning can already extract lane line pixels effectively in simple traffic scenes with good weather conditions, lane line detection in complex scenes is still challenging. Therefore, it is very important for supervised learning that the lane dataset covers a wide range of traffic scenes and has high annotation quality

  • Aiming at the difficulty of lane line detection in complex urban traffic scenes, a new method of lane line detection based on instance segmentation and adaptive perspective transformation is proposed in this paper
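
The sketch below shows one way the two-branch multitask head mentioned in the first highlight could be organised in PyTorch on top of a MobileNet backbone. It is only an illustrative sketch: the choice of torchvision's mobilenet_v2 features, the layer widths, and the embedding dimension are assumptions rather than details reported by the authors.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import mobilenet_v2

    class LaneMultiTaskNet(nn.Module):
        """Shared MobileNet encoder with a segmentation branch and an Id-embedding branch."""
        def __init__(self, embed_dim=4):
            super().__init__()
            # Shared backbone; mobilenet_v2 feature extractor outputs 1280 channels.
            self.encoder = mobilenet_v2(weights=None).features
            # Segmentation branch: lane vs. background logits.
            self.seg_head = nn.Sequential(
                nn.Conv2d(1280, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 2, 1))
            # Id embedding branch: per-pixel embeddings used afterwards to cluster
            # pixels belonging to the same lane line.
            self.embed_head = nn.Sequential(
                nn.Conv2d(1280, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, embed_dim, 1))

        def forward(self, x):
            feat = self.encoder(x)
            size = x.shape[2:]
            # Upsample both outputs back to the input resolution.
            seg = F.interpolate(self.seg_head(feat), size=size, mode='bilinear', align_corners=False)
            emb = F.interpolate(self.embed_head(feat), size=size, mode='bilinear', align_corners=False)
            return seg, emb

    # Usage: seg gives the binary lane-mask logits, emb the per-pixel Id embeddings.
    net = LaneMultiTaskNet()
    seg, emb = net(torch.randn(1, 3, 288, 800))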

Introduction

Vehicle and road safety has been a key issue for communities and governments [1]. With emerging new technologies and knowledge, advanced driver assistance systems (ADAS) have been proposed to reduce road accidents and improve vehicle safety [2]. In ADAS and even autonomous driving vehicles, the main technical bottleneck is the perception problem, which has two elements: road and lane perception and obstacle detection [3]. The robustness and stability of lane detection are vital for advanced driver assistance technology and even unmanned driving technology [4]. Lane detection and tracking aid in localizing the ego-vehicle motion, which is one of the very first and primary steps in most ADAS functions, such as lane departure warning (LDW) and lane change assistance. Lane detection can also aid other ADAS modules such as vehicle detection and driver intention perception.
