Abstract

Terrain recognition for off-road unmanned ground vehicles operating on unstructured terrain is far more complex than terrain recognition for road vehicles driven on structured roads. The large number of terrain classes intermixed in unstructured terrain makes classification with convolutional neural networks based on RGB images difficult. This is partly attributed to the lack of sufficiently annotated training data for the neural network, and partly to the difficulty of labeling such a large number of visually similar object classes. By introducing additional details about the scene with hyperspectral or multispectral cameras, scene classification can be greatly improved for annotating neural-network training data. Using the spectral signatures of different materials, hyperspectral imaging can detect the materials present in the scene. This article discusses a method that uses hyperspectral imaging to annotate RGB images and to perform semantic segmentation for autonomous driving applications on unstructured terrain. The RGB images are generated from the same hyperspectral data cube by extracting selected spectral bands in the visible light spectrum. Using the semantic segmentation network ResNet18, manually annotated training data are compared with data annotated with the assistance of the hyperspectral method by classifying terrain scenarios.
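The band-extraction step mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a hyperspectral cube stored as a `(height, width, bands)` array with a known per-band wavelength list, picks the bands nearest to nominal red, green, and blue wavelengths, and rescales each channel to 8-bit range. The function name, wavelength choices, and normalization are illustrative assumptions.

```python
import numpy as np

def hyperspectral_to_rgb(cube, wavelengths, rgb_nm=(640.0, 550.0, 460.0)):
    """Build an 8-bit RGB image from a (H, W, bands) hyperspectral cube.

    Selects the band nearest each requested R, G, B wavelength (in nm)
    and stretches each channel independently to [0, 255].
    Illustrative sketch only -- real pipelines may average several bands
    or apply radiometric calibration instead of a per-channel stretch.
    """
    wavelengths = np.asarray(wavelengths, dtype=float)
    # Index of the band closest to each target wavelength.
    idx = [int(np.argmin(np.abs(wavelengths - nm))) for nm in rgb_nm]
    rgb = cube[:, :, idx].astype(float)
    # Contrast-stretch each channel independently to [0, 255].
    for c in range(3):
        ch = rgb[:, :, c]
        lo, hi = ch.min(), ch.max()
        rgb[:, :, c] = 0.0 if hi == lo else (ch - lo) / (hi - lo) * 255.0
    return rgb.astype(np.uint8)

# Toy example: a 4x4 cube with 10 bands spanning 400-1000 nm.
wl = np.linspace(400.0, 1000.0, 10)
cube = np.random.rand(4, 4, 10)
rgb = hyperspectral_to_rgb(cube, wl)
```

A nearest-band lookup like this keeps the RGB rendering and the hyperspectral annotation pixel-aligned, since both views come from the same data cube.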
