Abstract

The development of self-driving cars improves driving safety and accelerates urban transportation. These systems require robust, real-time understanding of traffic conditions and surroundings, both day and night. Many semantic image segmentation techniques based on deep neural networks have been proposed to partition traffic scene images as a key processing step. However, existing algorithms and public datasets mostly target visible-spectrum images captured in daylight, and most of these algorithms are computationally intensive. Little research to date has addressed the fusion of thermal and visible images or lightweight, high-performance deep convolutional networks for this task. In this paper, a multispectral Encoder Fused Atrous Spatial Pyramid Pooling (EFASPP) U-Net deep network is proposed to merge the features of visible and thermal images recorded in night-time traffic scenes. The proposed network builds on the U-Net architecture because of its high accuracy, fast processing, and modest training-data requirements. The fusion of visible and thermal features in the encoders of the EFASPP U-Net is performed using standard and atrous convolution layers. In addition, because sufficient public data are lacking in this field, a new multispectral dataset of night-time traffic scenes is developed in this work. The major contributions of this work are a lightweight, high-performance multispectral semantic segmentation network for smart vehicles and a new dataset for this application. The experimental results show the high accuracy and speed of the proposed method.
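To make the encoder-fusion idea concrete, the following NumPy sketch illustrates how standard and atrous (dilated) convolution responses from two modalities could be combined by channel stacking. This is only a hedged illustration of the general technique named in the abstract, not the paper's actual EFASPP implementation; the feature maps, kernel, and dilation rates are illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Valid 2D convolution of a single-channel map with a dilated (atrous) kernel."""
    kh, kw = kernel.shape
    # The effective receptive field grows with the dilation rate.
    eff_h = (kh - 1) * dilation + 1
    eff_w = (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input sparsely according to the dilation rate.
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

# Hypothetical single-channel feature maps from the visible and thermal branches.
rng = np.random.default_rng(0)
visible = rng.standard_normal((16, 16))
thermal = rng.standard_normal((16, 16))
kernel = rng.standard_normal((3, 3))

# Standard (dilation=1) and atrous (dilation=2) responses for each modality.
feats = [dilated_conv2d(m, kernel, d) for m in (visible, thermal) for d in (1, 2)]

# Crop to a common spatial size and stack along a channel axis (feature fusion).
h = min(f.shape[0] for f in feats)
w = min(f.shape[1] for f in feats)
fused = np.stack([f[:h, :w] for f in feats], axis=0)
print(fused.shape)  # (4, 12, 12)
```

In a real network the fused tensor would feed subsequent encoder layers; here, stacking along the channel axis simply shows one common way multispectral features are merged.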
