Abstract

In intelligent driving, lane line detection is a basic but challenging task, especially under complex road conditions. Current detection algorithms based on convolutional neural networks perform well in simple, well-lit scenes where the lane lines are clean and unobstructed, but they degrade in complex scenes where the lines are damaged, occluded, or poorly lit. In this article, we move beyond these restrictions and propose a new network, LaneFormer. We use an end-to-end network that performs downsampling and upsampling three times each and then fuses the resulting features along their respective channels to capture the slender structure of lane lines. At the same time, a correction module adjusts the dimensions of the extracted features with an MLP and uses a loss function to judge whether the features have been fully extracted. Finally, we feed the features into a transformer network, detect lane line points through the attention mechanism, and design a road and camera model to fit the identified lane line feature points. Our method is validated on the TuSimple benchmark, achieving state-of-the-art accuracy with the lightest model and the fastest speed.
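The abstract gives only a high-level description of the pipeline, so the following is a minimal PyTorch-style sketch of that description, not the authors' implementation: the module names (EncoderDecoder, CorrectionMLP, LaneFormerSketch), channel widths, query count, and points-per-lane are all illustrative assumptions.

```python
# Hypothetical sketch of the pipeline described in the abstract (not the authors'
# code): three downsampling and three upsampling stages whose outputs are fused
# channel-wise, an MLP-based correction module that adjusts feature dimensions,
# and a transformer that attends over the feature tokens to predict lane points.
import torch
import torch.nn as nn


class EncoderDecoder(nn.Module):
    """Three downsampling and three upsampling stages; each decoder stage is
    fused (added) with the same-scale encoder feature before upsampling."""
    def __init__(self, c=64):
        super().__init__()
        self.downs = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3 if i == 0 else c, c, 3, stride=2, padding=1),
                          nn.BatchNorm2d(c), nn.ReLU(inplace=True))
            for i in range(3)
        ])
        self.ups = nn.ModuleList([
            nn.Sequential(nn.ConvTranspose2d(c, c, 4, stride=2, padding=1),
                          nn.BatchNorm2d(c), nn.ReLU(inplace=True))
            for _ in range(3)
        ])

    def forward(self, x):
        skips = []
        for down in self.downs:
            x = down(x)
            skips.append(x)
        for up, skip in zip(self.ups, reversed(skips)):
            x = up(x + skip)  # channel-wise fusion with the same-scale feature
        return x              # (B, c, H, W)


class CorrectionMLP(nn.Module):
    """Correction module: adjusts the dimension of the extracted features
    with an MLP, producing one token per spatial location."""
    def __init__(self, c=64, d=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(c, d), nn.ReLU(inplace=True), nn.Linear(d, d))

    def forward(self, feat):
        tokens = feat.flatten(2).transpose(1, 2)  # (B, c, H, W) -> (B, H*W, c)
        return self.mlp(tokens)                   # (B, H*W, d)


class LaneFormerSketch(nn.Module):
    def __init__(self, d=128, num_queries=10, points_per_lane=16):
        super().__init__()
        self.backbone = EncoderDecoder()
        self.correction = CorrectionMLP(d=d)
        self.queries = nn.Embedding(num_queries, d)          # one query per candidate lane
        layer = nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.point_head = nn.Linear(d, 2 * points_per_lane)  # (x, y) per sampled lane point

    def forward(self, img):
        tokens = self.correction(self.backbone(img))                     # (B, N, d)
        q = self.queries.weight.unsqueeze(0).expand(img.size(0), -1, -1)
        hs = self.decoder(q, tokens)                                     # attention over features
        return self.point_head(hs)                                       # (B, num_queries, 2*P)


if __name__ == "__main__":
    model = LaneFormerSketch()
    out = model(torch.randn(1, 3, 64, 128))  # small input keeps the token count modest
    print(out.shape)                          # torch.Size([1, 10, 32])
```

The predicted per-query points would then be fit with the road and camera model mentioned in the abstract; that fitting step is not sketched here because the abstract does not specify its form.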
