Abstract

Lane detection is a critical but challenging task in autonomous driving, especially in complex scenes. In most complex scenes, however, only part of the scene is challenging; the difficulty rarely extends across the entire road surface. We believe that perceiving the scene's structure and using information from reliable areas to detect lanes in challenging areas is the key to robust lane detection. This paper proposes RoaDSaVe, a novel lane detection method that remains reliable even in complex situations while meeting real-time requirements. RoaDSaVe consists of three major modules: scene awareness, physical inference, and validity effectiveness. The scene awareness module combines shallow and deep features in a neural network to produce more accurate spatial and semantic feature maps. An optimization method then derives the physical parameters of the lane lines from the road and lane semantic labels. The physical inference module builds an adjusted histogram and treats locations with high scores and high reliability as valid lines. In the validity effectiveness module, potential points are guided by the valid lines: a potential line that did not achieve sufficient validity in the previous module can still be accepted if it meets the required score. Experiments on several public datasets show that RoaDSaVe performs strongly in scenes with local challenges and achieves excellent performance compared with state-of-the-art methods.
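
For illustration, the histogram step described above can be sketched in Python/NumPy. This is a minimal, generic sketch of the column-histogram idea, not the paper's exact adjusted-histogram formulation: the input is assumed to be a binary lane-segmentation mask, and the function name, score threshold, and peak-suppression window are hypothetical choices made for the example.

    import numpy as np

    def lane_bases_from_mask(lane_mask: np.ndarray, min_score: int = 50) -> list[int]:
        """Locate candidate lane-line base columns in a binary lane mask."""
        h, w = lane_mask.shape
        # Column-wise histogram over the lower half of the image, where lane
        # pixels are closest to the camera and usually most reliable.
        histogram = lane_mask[h // 2:, :].sum(axis=0)

        bases: list[int] = []
        for col in np.argsort(histogram)[::-1]:  # strongest columns first
            if histogram[col] < min_score:       # score too low to be a valid line
                break
            # Suppress peaks that sit too close to an already-accepted base.
            if all(abs(int(col) - b) > w // 20 for b in bases):
                bases.append(int(col))
        return sorted(bases)

    # Synthetic example: a 120x200 mask containing two vertical "lane lines".
    mask = np.zeros((120, 200), dtype=np.uint8)
    mask[:, 50] = 1
    mask[:, 150] = 1
    print(lane_bases_from_mask(mask))  # -> [50, 150]

Note that the sketch approximates reliability with a single fixed threshold, whereas the abstract indicates that the paper's adjusted histogram weighs both the score and the reliability of each location before declaring a line valid.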
