Abstract
Lane detection plays an essential part in advanced driver-assistance systems and autonomous driving systems. However, lane detection is affected by many factors, such as challenging traffic situations, and detecting multiple lanes simultaneously is also important. To address these problems, we propose a lane detection method based on instance segmentation, named RS-Lane. The method builds on LaneNet and uses the Split Attention mechanism proposed by ResNeSt to improve feature representation on slender, sparsely annotated targets like lane markings. We also use Self-Attention Distillation to enhance the feature representation capability of the network without adding inference time. RS-Lane can detect lanes without a limit on their number. Tests on the TuSimple and CULane datasets show that RS-Lane achieves results comparable with state-of-the-art methods and improves on them in challenging traffic situations such as no line, dazzle light, and shadow. This research provides a reference for the application of lane detection in autonomous driving and advanced driver-assistance systems.
Highlights
Lane detection plays a vital role in autonomous driving
We propose a lane detection method based on LaneNet [1], using Split Attention proposed by ResNeSt [2] and Self-Attention Distillation (SAD) [3] to improve feature representation on slender, sparsely annotated targets like lane markings
Current lane detection methods can be roughly divided into two categories: one based on traditional computer vision and the other based on deep learning
Summary
Lane detection plays a vital role in autonomous driving. Reliable lane detection can help autonomous driving systems make the right decisions. Most traditional detection methods rely on extracting a certain feature to detect lanes, such as color features [4,5,6], edge features [7, 8], and geometric features [9,10,11], possibly combined with the Hough Transform [12] or Random Sample Consensus (RANSAC) [13, 14]. These methods are simple and efficient, but they require manual parameter tuning. Although they perform well in normal situations, they cannot adapt to varying conditions such as changing lighting and occlusion.
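To illustrate the traditional pipeline mentioned above, the sketch below implements a minimal Hough Transform from scratch in NumPy: each edge pixel votes for all (rho, theta) line parameters it could belong to, and the accumulator peak identifies the dominant line. This is a simplified stand-in for the cited methods (a real pipeline would first extract edges, e.g. with a Canny detector, and often refine the fit with RANSAC); the function name and bin counts are illustrative choices, not from the paper.

```python
import numpy as np

def hough_lines(edges, n_theta=180, n_rho=200):
    """Vote each edge pixel of a binary edge map into a (rho, theta)
    accumulator, using the line parameterization rho = x*cos(t) + y*sin(t)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        r = x * cos_t + y * sin_t                # rho for every theta at once
        idx = np.round((r + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1        # one vote per theta bin
    return acc, rhos, thetas

# Synthetic edge map: the diagonal line y = x on a 100x100 grid.
edges = np.zeros((100, 100), dtype=bool)
ii = np.arange(100)
edges[ii, ii] = True

acc, rhos, thetas = hough_lines(edges)
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(round(np.degrees(thetas[t_idx])))          # orientation of strongest line
```

For the line y = x, all 100 edge pixels vote into the same bin at theta = 135 degrees (the line's normal direction), so the accumulator peak recovers the line exactly. The manual tuning the summary mentions shows up here as the choice of bin resolutions and the vote threshold used to accept a peak.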