Abstract

This paper presents a road detection method for autonomous driving based on an end-to-end neural network model. Our method exploits both the characteristics of road boundaries and the multi-task learning capability of a deep convolutional network. By reassigning labels and rebalancing the loss of road pixels, we focus learning on hard examples at the boundary to refine segmentation performance. We then propose a data augmentation method based on road geometric transformations to make the network model robust across diverse traffic scenes. Building on these two methods, we integrate a unified architecture consisting of a shared deep residual encoder network and multi-branch decoder sub-networks, which adopts road scene classification as a supervised learning task and realizes road segmentation and scene classification simultaneously. Experiments show that the proposed method achieves the highest MaxF value in most road scenes; both qualitative and quantitative evaluations on the KITTI-Road benchmark demonstrate its superior performance.
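The boundary-focused loss rebalancing described above can be illustrated with a minimal sketch. The helper names, the neighborhood radius, and the weight value below are hypothetical choices for illustration, not the paper's actual formulation: pixels whose neighborhood contains both road and non-road labels are treated as boundary (hard) examples and their per-pixel cross-entropy loss is up-weighted.

```python
import numpy as np

def boundary_weight_map(label, radius=1, boundary_weight=5.0):
    """Build a per-pixel weight map that up-weights pixels near the
    road/non-road boundary (hypothetical stand-in for the paper's
    label reassignment and loss rebalancing)."""
    h, w = label.shape
    padded = np.pad(label, radius, mode="edge")
    boundary = np.zeros((h, w), dtype=bool)
    # A pixel is on the boundary if any neighbour within `radius`
    # carries a different label (a simple morphological gradient).
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            boundary |= shifted != label
    return np.where(boundary, boundary_weight, 1.0)

def weighted_bce(prob, label, weights, eps=1e-7):
    """Binary cross-entropy averaged with the boundary weights, so
    hard boundary pixels contribute more to the training signal."""
    prob = np.clip(prob, eps, 1.0 - eps)
    ce = -(label * np.log(prob) + (1.0 - label) * np.log(1.0 - prob))
    return float((weights * ce).sum() / weights.sum())

# Toy 4x4 label map: left half non-road (0), right half road (1).
label = np.zeros((4, 4))
label[:, 2:] = 1.0
weights = boundary_weight_map(label)
loss = weighted_bce(label, label, weights)  # perfect prediction -> near-zero loss
```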
