Abstract

This study presents a shallow and robust road segmentation model. Computer-aided applications such as driver assistance require real-time and accurate processing. Current studies use Deep Convolutional Neural Networks (DCNN) for road segmentation. However, DCNN requires high computational power and large amounts of labeled data to learn abstract features in deeper layers; the deeper a layer is, the more abstract the information it tends to learn. Moreover, prediction time is an important aspect of autonomous vehicles. To overcome these issues, a Multi-feature View-based Shallow Convolutional Neural Network (MVS-CNN) is proposed that utilizes abstract features extracted from explicitly derived representations of the input image. Gradient information of the input image is used as additional channels to enhance the learning process of the proposed architecture. The multi-feature views are fed to a fully-connected neural network to accurately segment the road regions. The testing accuracy demonstrates that the proposed MVS-CNN achieves an improvement of 2.7% over a baseline CNN that uses only RGB inputs. Furthermore, comparison with a popular semantic segmentation network (SegNet) shows that the proposed scheme performs better while being more efficient during training and evaluation. Unlike traditional segmentation techniques, which are based on an encoder-decoder architecture, the proposed MVS-CNN consists of only the encoder network. The proposed MVS-CNN has been trained and validated on two well-known datasets, the KITTI Vision Benchmark and the Cityscapes dataset, and the results have been compared with state-of-the-art deep learning architectures. The proposed MVS-CNN outperforms them in terms of segmentation accuracy and processing time.
Based on the experimental results, the proposed architecture can be considered an efficient road segmentation architecture for autonomous vehicle systems.

Highlights

  • Autonomous driving has recently gained great attention from researchers

  • This study focuses on road segmentation; only road labels were selected from the datasets

  • The proposed Multi-feature View-based Shallow Convolutional Neural Network (MVS-CNN) is evaluated using all multi-feature view combinations together with the input image (i.e., IRGB + Gx + Gy + gradient magnitude (GMag))
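The multi-feature view construction named above can be sketched in a few lines of numpy. This is not the paper's implementation: the function name `multi_feature_views`, the use of Sobel filters for Gx/Gy, and the channel-mean grayscale conversion are assumptions made for illustration; the excerpt only states that gradient channels (Gx, Gy, GMag) are stacked with the RGB input.

```python
import numpy as np

# Sobel horizontal-gradient kernel (its transpose gives the vertical one).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def _filter2d(img, kernel):
    """'Same'-size 2-D correlation of a 2-D image with a 3x3 kernel,
    using zero padding at the borders."""
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros((h, w), dtype=float)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i:i + h, j:j + w]
    return out

def multi_feature_views(rgb):
    """Stack an (H, W, 3) RGB image with Gx, Gy and gradient magnitude
    (GMag) channels, yielding an (H, W, 6) network input."""
    gray = rgb.mean(axis=2)            # simple luminance proxy (assumption)
    gx = _filter2d(gray, SOBEL_X)      # horizontal gradient
    gy = _filter2d(gray, SOBEL_X.T)    # vertical gradient
    gmag = np.hypot(gx, gy)            # gradient magnitude
    return np.dstack([rgb, gx, gy, gmag])
```

The six-channel tensor replaces the plain RGB input of the baseline CNN; the gradient channels make road boundaries explicit to the shallow network instead of requiring deep layers to learn them.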


Introduction

Autonomous driving has recently gained great attention from researchers. The key aim of Intelligent Transport Systems (ITS) is to avoid accidents while accurately guiding the vehicle along the road, observing traffic safety rules and avoiding obstacles in the way [1], [2]. Advancement in the field of Autonomous Vehicles (AV) has put forth enormous challenges in the automotive industry [3]. To reduce the risk of road accidents, it is necessary to accurately distinguish the road region from other regions. This helps autonomous vehicles navigate correctly and understand the surrounding environment, including traffic signs [5], [6] and signals [7], [8], pedestrians [9], road lanes [10], and other vehicles on the road [11]. Mersky et al. [12] have concluded that reducing the prediction time of detection/segmentation algorithm used in
