Abstract

Vision-based semantic segmentation and obstacle detection are key perception tasks for autonomous driving, but they are typically performed by separate frameworks, which increases computational complexity. Moreover, while vision-based perception using deep learning achieves state-of-the-art accuracy, its performance is susceptible to variations in the environment. In this paper, we propose a radar- and vision-based deep learning perception framework, termed SO-Net, to address these limitations of vision-based perception. SO-Net also integrates semantic segmentation and object detection within a single framework: it contains two input branches, corresponding to vision and radar feature extraction, and two output branches, corresponding to object detection and semantic segmentation. The performance of the proposed framework is validated on the public nuScenes dataset. The results show that SO-Net improves the accuracy of vision-only perception and reduces computational complexity compared with separate semantic segmentation and object detection frameworks.
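The two-input, two-output layout described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: the function names, the element-wise fusion, and the toy feature extractors and heads are all assumptions introduced here, not the authors' SO-Net implementation.

```python
# Hypothetical sketch of a two-input, two-output perception network
# in the style of SO-Net: a vision branch and a radar branch are fused
# into shared features, which feed a detection head and a segmentation
# head. All names, operations, and values are illustrative assumptions.

def vision_branch(image):
    # Stand-in for a CNN feature extractor over camera pixels.
    return [p * 0.5 for p in image]

def radar_branch(points):
    # Stand-in for a feature extractor over radar returns.
    return [p * 2.0 for p in points]

def fuse(vision_feats, radar_feats):
    # Simple element-wise fusion of the two modality features
    # (an assumed fusion scheme, chosen only for illustration).
    return [v + r for v, r in zip(vision_feats, radar_feats)]

def detection_head(shared_feats):
    # Toy "object detection" head: one objectness decision per feature.
    return [f > 1.0 for f in shared_feats]

def segmentation_head(shared_feats):
    # Toy "semantic segmentation" head: one class label per feature.
    return [int(f) for f in shared_feats]

def so_net_forward(image, radar):
    # Single forward pass serving both output tasks from shared features,
    # which is where the computational saving over two separate
    # frameworks would come from.
    shared = fuse(vision_branch(image), radar_branch(radar))
    return detection_head(shared), segmentation_head(shared)

detections, segmentation = so_net_forward([0.2, 1.6], [0.1, 0.9])
```

The point of the sketch is structural: because both heads read the same fused features, the feature extraction cost is paid once rather than once per task.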
