Abstract

A multilane detection system is a vital prerequisite for realizing the higher ADAS functionality of autonomous navigation. In this work, we present an efficient convolutional neural network (CNN) architecture for real-time detection of multiple lane boundaries using a camera sensor. Our network has a simple encoder-decoder architecture and is a specialized two-class semantic segmentation network designed to segment lane boundaries. The efficacy of our network stems from two key insights that underlie all of our design decisions. First, we term a lane boundary a weak-class object in the context of semantic segmentation. We show that weak-class objects, which occupy relatively few pixels in the scene, also have relatively low detection accuracy among known segmentation methods. We present novel design choices and intuitions that improve the segmentation accuracy of weak-class objects and, in turn, reduce computation time. Our second insight lies in the way we encode ground-truth information in our derived dataset. Instead of annotating just the visible lane markers, we accurately delineate the lane boundaries in the ground truth for challenging scenarios such as occlusion, low light, and degraded lane markings. We then leverage the CNN's ability to concisely summarize the global and local context in an image to accurately infer lane boundaries in these challenging cases. We evaluate our network against ENet and FCN-8 and find that it performs notably better in both speed and accuracy. Our network achieves an encouraging 46 FPS on the NVIDIA Drive PX2 platform and has been validated on our test vehicle in highway driving conditions.
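To make the described setup concrete, the following is a minimal sketch, not the authors' actual network, of a two-class encoder-decoder segmentation CNN for lane-boundary pixels, written in PyTorch. All layer names, channel widths, and the input resolution are illustrative assumptions; the paper's architecture details are not given in the abstract.

```python
# Minimal sketch (assumed, not the authors' network): an encoder-decoder CNN
# that labels every pixel as lane boundary (class 1) or background (class 0).
import torch
import torch.nn as nn


class LaneSegNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Encoder: strided convolutions downsample the frame and gather context.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.BatchNorm2d(16), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        )
        # Decoder: transposed convolutions restore full resolution for per-pixel labels.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, num_classes, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    net = LaneSegNet()
    frame = torch.randn(1, 3, 256, 512)   # dummy camera frame (assumed resolution)
    logits = net(frame)                    # shape (1, 2, 256, 512): per-pixel class scores
    lane_mask = logits.argmax(dim=1)       # 1 = lane-boundary pixel, 0 = background
    print(lane_mask.shape)
```

Because lane-boundary pixels are scarce relative to background (the "weak class" imbalance the abstract highlights), a common training choice in such sketches is a class-weighted cross-entropy loss, e.g. `nn.CrossEntropyLoss(weight=torch.tensor([1.0, 10.0]))`; whether the authors use this particular remedy is not stated in the abstract.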
