Most current computer vision-based advanced driver assistance systems (ADAS) detect and track objects quite successfully under regular conditions. However, under adverse weather and changing lighting conditions, and in complex situations with many overlapping objects, these systems are not completely reliable. The spectral reflectance of the different objects in a driving scene beyond the visible spectrum can provide additional information to increase the reliability of these systems, especially under challenging driving conditions. Furthermore, this information may be significant enough to develop vision systems that allow for a better understanding and interpretation of the whole driving scene. In this work we explore the use of snapshot, video-rate hyperspectral imaging (HSI) cameras in ADAS on the assumption that the near-infrared (NIR) spectral reflectance of different materials can help to better segment the objects in real driving scenarios. To do this, we use the HSI-Drive 1.1 dataset to perform various experiments on spectral classification algorithms. However, retrieving information from hyperspectral recordings of natural outdoor scenes is challenging, mainly because of deficient color constancy and other inherent shortcomings of current snapshot HSI technology, which limits the development of purely spectral classifiers. Consequently, in this work we analyze to what extent the spatial features encoded by standard, tiny fully convolutional network (FCN) models can improve the performance of HSI segmentation systems for ADAS applications. To be realistic from an engineering viewpoint, this research focuses on the development of a feasible HSI segmentation system for ADAS, which implies considering implementation constraints and latency specifications throughout the algorithmic development process. For this reason, it is particularly important to include the raw image preprocessing stage in the study of the data processing pipeline. Accordingly, this paper describes the development and deployment of a complete machine learning-based HSI segmentation system for ADAS, including the characterization of its performance on different embedded computing platforms: a single-board computer, an embedded GPU SoC, and a programmable system-on-chip (PSoC) with embedded FPGA. We verify the superiority of the FPGA-based PSoC over the GPU SoC in terms of energy consumption and, particularly, processing latency, and demonstrate that it is feasible to achieve segmentation speeds within the range of ADAS industry specifications using standard development tools.
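For illustration only, the sketch below shows what a tiny FCN combining per-pixel spectral features with spatial context for HSI segmentation might look like. It is not the architecture reported in the paper: the PyTorch framework, the `TinyFCN` name, the 25-band input, the number of classes, and all layer widths are assumptions chosen for exposition.

```python
# Illustrative sketch, not the authors' model: band count, class count and
# layer sizes are placeholders assumed for exposition.
import torch
import torch.nn as nn


class TinyFCN(nn.Module):
    """Small encoder-decoder FCN fusing spectral and spatial features."""

    def __init__(self, in_bands: int = 25, num_classes: int = 6):
        super().__init__()
        # 1x1 convolutions act as a per-pixel (purely spectral) feature extractor.
        self.spectral = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=1), nn.ReLU(inplace=True),
        )
        # 3x3 convolutions add spatial context around each pixel.
        self.encoder = nn.Sequential(
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Upsample back to the input resolution for dense, per-pixel class scores.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(self.spectral(x)))


# Example with a hypothetical preprocessed cube (batch, bands, height, width).
cube = torch.rand(1, 25, 128, 256)
logits = TinyFCN()(cube)  # -> shape (1, 6, 128, 256), one score map per class
```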