For autonomous vehicles, free-space detection is an essential part of visual perception. With the development of multi-modal convolutional neural networks (CNNs) in recent years, the performance of driving-scene semantic segmentation algorithms has improved dramatically, and most recent free-space detection algorithms are therefore built on multiple sensors. However, multi-modal CNNs have high data throughput and contain a large number of computationally intensive convolution operations, limiting their feasibility for real-time applications. Field Programmable Gate Arrays (FPGAs) offer a unique combination of flexibility, performance, and low power for these problems, accommodating multi-modal data and accelerating different compression algorithms, while network lightweighting methods facilitate the deployment of CNNs on such resource-constrained devices. In this paper, we propose a network lightweighting method for a multi-modal free-space detection algorithm. We first propose an FPGA-friendly multi-modal free-space detection lightweight network. It is built from FPGA-friendly operators and achieves a 95.54% MaxF score on the test set of the KITTI-Road free-space detection task with an 81 ms runtime on a 700 W GPU. We then present a pruning approach for this network to reduce the number of parameters, since the complete model exceeds the on-chip memory of the FPGA. The pruning proceeds in two parts. For the feature extractors, we propose a data-dependent filter pruner based on the principle that a low-rank feature map contains less information. To preserve the integrity of the multi-modal information, the pruner operates independently on each modality. For the segmentation decoder, we apply channel pruning to remove redundant parameters. Finally, we implement our design on an FPGA board using 8-bit quantisation, and the accelerator achieves outstanding performance.
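The rank-based pruning principle stated above (a low-rank feature map carries less information, so its filter is a pruning candidate) can be sketched as follows. This is a generic illustration under our own assumptions, not the paper's implementation: the scoring function, keep ratio, and toy data are all hypothetical.

```python
import numpy as np

def filter_ranks(feature_maps):
    """Score each filter by the average matrix rank of its output
    feature maps over a batch (low rank ~ less information).
    feature_maps: array of shape (batch, channels, H, W)."""
    b, c, h, w = feature_maps.shape
    ranks = np.zeros(c)
    for ch in range(c):
        for n in range(b):
            ranks[ch] += np.linalg.matrix_rank(feature_maps[n, ch])
    return ranks / b

def prune_indices(ranks, keep_ratio=0.5):
    """Return sorted indices of the filters to keep (highest average rank)."""
    k = max(1, int(len(ranks) * keep_ratio))
    return np.sort(np.argsort(ranks)[::-1][:k])

# Toy example: 8 filters; filters 0-3 emit rank-1 (constant) maps,
# filters 4-7 emit near-full-rank noise maps.
rng = np.random.default_rng(0)
low = np.ones((4, 4, 8, 8)) * rng.standard_normal((4, 4, 1, 1))
high = rng.standard_normal((4, 4, 8, 8))
fmaps = np.concatenate([low, high], axis=1)   # shape (4, 8, 8, 8)
keep = prune_indices(filter_ranks(fmaps), keep_ratio=0.5)
print(keep)  # high-rank filters survive: [4 5 6 7]
```

In the paper's setting, such a scorer would be applied per modality so that pruning one branch cannot starve the other of information.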
A real-time scene segmentation application on KITTI-Road is used to evaluate our algorithm, and the model achieves a 94.39% MaxF score with a minimum runtime of 14 ms on a 20 W FPGA device.
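The deployment relies on 8-bit quantisation, but the abstract does not specify the scheme. A minimal sketch of one common choice, symmetric per-tensor int8 quantisation, is shown below; the function name and scale convention are assumptions, not the paper's method.

```python
import numpy as np

def quantise_int8(w):
    """Symmetric per-tensor 8-bit quantisation: map float weights to
    int8 with a single scale; dequantise by multiplying back by scale."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.array([-0.5, 0.0, 0.25, 0.5], dtype=np.float32)
q, s = quantise_int8(w)
print(q)  # [-127    0   64  127]
print(np.max(np.abs(q.astype(np.float32) * s - w)))  # small reconstruction error
```

On an FPGA, the int8 weights halve (versus fp16) or quarter (versus fp32) the memory footprint, which is what makes fitting the pruned model in on-chip memory feasible.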