Abstract

An increasing number of tasks have been developed for autonomous driving and advanced driver assistance systems. This raises the problem of integrating multiple functionalities on a power-constrained computing device. The objective of this work is therefore to alleviate the complex learning procedure of pixel-wise approaches to driving scene understanding. In this paper, we recast the pixel-wise semantic segmentation task as a point detection task and apply it to detecting free space and lanes. Instead of pixel-wise learning, we train a single deep convolutional neural network to detect points of interest at the grid level, with end branches tailored to the characteristics of the target classes. To obtain the final pixel-wise segmentation result and a parametric description of the lanes, we propose a computer vision (CV) based post-processing step that decodes the points output by the network. The results show that the network learns the spatial relationships among points of interest, including representative points on the contour of the free-space region and representative points along the center of the road lane. We verify our method on two publicly available datasets, achieving 98.2% mIoU on the KITTI dataset for free-space evaluation and 97.8% accuracy on the TuSimple dataset (within the field of view below the $y = 320$ axis) for lane evaluation.
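
To make the decoding step concrete, the following is a minimal sketch (not the authors' implementation) of how grid-level points might be decoded by CV-based post-processing: contour points are filled into a pixel-wise free-space mask, and lane-center points are fitted with a polynomial as a parametric lane description. The point coordinates, image size, and polynomial degree below are illustrative assumptions.

import numpy as np
import cv2

def decode_free_space(contour_points, image_shape):
    """Fill the polygon spanned by the predicted contour points to
    recover a pixel-wise free-space mask (sketch, not the paper's code)."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    polygon = np.asarray(contour_points, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [polygon], color=255)
    return mask

def decode_lane(center_points, degree=2):
    """Fit a polynomial x = f(y) through the predicted lane-center points,
    giving a parametric description of the lane."""
    pts = np.asarray(center_points, dtype=np.float32)
    coeffs = np.polyfit(pts[:, 1], pts[:, 0], degree)  # x as a function of y
    return np.poly1d(coeffs)

# Hypothetical (x, y) point outputs from the point-detection network.
free_space_pts = [(100, 370), (540, 370), (420, 250), (220, 250)]
lane_pts = [(320, 370), (330, 330), (345, 290), (365, 250)]

mask = decode_free_space(free_space_pts, image_shape=(375, 1242))
lane_fn = decode_lane(lane_pts)
print(int(mask.sum() // 255), "free-space pixels; lane x at y=300:", lane_fn(300))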
