Abstract

Real-time, accurate detection of three-dimensional (3D) objects is a fundamental necessity for self-driving vehicles. Most existing computer-vision approaches are based on convolutional neural networks (CNNs). Although CNN-based approaches can achieve high detection accuracy, their high energy consumption is a severe drawback, so novel energy-efficient approaches should be explored. The spiking neural network (SNN) is a promising candidate because its energy consumption is orders of magnitude lower than that of a CNN. Unfortunately, the study of SNNs has so far been limited to small networks; the application of SNNs to large 3D object detection networks remains largely open. In this paper, we integrate a spiking convolutional neural network (SCNN) with temporal coding into the YOLOv2 architecture for real-time object detection. To take advantage of spiking signals, we develop a novel data-preprocessing layer that translates 3D point-cloud data into spike times. We propose an analog circuit to implement the non-leaky integrate-and-fire neuron used in our SCNN, from which the energy consumption of each spike is estimated. Moreover, we present a method to calculate the sparsity and energy consumption of the overall network. Extensive experiments on the KITTI dataset show that the proposed network reaches detection accuracy competitive with existing approaches, yet with much lower average energy consumption. If implemented in dedicated hardware, our network could achieve a mean sparsity of 56.24% and an extremely low total energy consumption of only 0.247 mJ. Implemented on an NVIDIA GTX 1080Ti GPU, it achieves a frame rate of 35.7 fps, high enough for real-time object detection.
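The temporal coding mentioned above can be illustrated with a minimal sketch. The mapping below assumes a simple linear rule in which stronger normalized inputs fire earlier; the paper's exact encoding of point-cloud data into spike times may differ, and the function name and parameters here are hypothetical:

```python
import numpy as np

def encode_spike_times(values, t_max=1.0):
    """Temporal-coding sketch: map normalized inputs in [0, 1] to spike
    times in [0, t_max], with stronger inputs firing earlier.
    (Illustrative linear rule; the paper's exact encoding may differ.)"""
    v = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - v)

# Example: encoding three voxel intensity values
times = encode_spike_times([1.0, 0.5, 0.0])  # strongest input spikes first
```

Under this convention, a maximal input spikes immediately (t = 0) and an empty voxel spikes last (t = t_max), so information is carried entirely in spike timing rather than spike rate.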

Highlights

  • In recent years, increased attention has been paid to point-cloud data processing for autonomous driving applications because of significant improvements in automotive light detection and ranging (LiDAR) sensors, which deliver three-dimensional (3D) point clouds of the environment in real time

  • 3) We provide an analog circuit to implement the non-leaky integrate-and-fire neuron used in our spiking convolutional neural network (SCNN), based on which the energy consumption of a spike is estimated

  • Simon et al. [19] compared their proposed model, Complex-YOLO, with the first five leading models presented in Table 3 and demonstrated that their model outperformed all five in terms of running time and efficiency
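The sparsity and per-spike energy figures in the highlights above combine in a straightforward way. A back-of-envelope sketch, assuming (hypothetically, not the paper's exact accounting) that silent neurons consume negligible dynamic energy:

```python
def estimate_total_energy(num_neurons, mean_sparsity, energy_per_spike):
    """Rough energy accounting sketch: total dynamic energy is
    approximately the number of active (spiking) neurons times the
    estimated energy per spike. (Hypothetical formula for illustration;
    the paper derives per-spike energy from an analog circuit model.)"""
    active = num_neurons * (1.0 - mean_sparsity)
    return active * energy_per_spike
```

For example, with a mean sparsity of 56.24%, only about 44% of neurons contribute to the energy budget, which is the mechanism behind the low total consumption reported in the abstract.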


Summary

INTRODUCTION

S. Zhou et al.: Deep SCNN-Based Real-Time Object Detection for Self-Driving Vehicles Using LiDAR Temporal Data

In recent years, increased attention has been paid to point-cloud data processing for autonomous driving applications because of significant improvements in automotive light detection and ranging (LiDAR) sensors, which deliver three-dimensional (3D) point clouds of the environment in real time. Zhou et al. developed the VoxelNet method, which can learn discriminative feature representations from point clouds and predict accurate 3D bounding boxes in an end-to-end module. Simon et al. [20] presented a novel fusion of neural networks (i.e., Complexer-YOLO) that uses a state-of-the-art 3D detector and visual semantic segmentation in the field of autonomous driving. The accuracy of these methods has been demonstrated with the KITTI vision benchmark dataset [3]. Using this special temporal coding method, the input data are converted directly into spike times, which permits us to design SCNNs with energy-efficient temporal coding. We use such SCNNs to replace the CNNs of the YOLOv2 architecture [17] to develop a large-scale object detection network. Simulation results demonstrate the extremely low energy consumption of our network.
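With temporal coding, the non-leaky integrate-and-fire neuron mentioned above has a simple dynamics: each input spike turns on a constant current proportional to its synaptic weight, and the neuron emits its own spike when the integrated potential first crosses a threshold. A minimal sketch under that assumption (the function and its formulation are illustrative, not the paper's exact circuit model):

```python
import numpy as np

def if_spike_time(in_times, weights, threshold=1.0):
    """Non-leaky integrate-and-fire sketch: an input spike at time t_i
    injects a constant current w_i from t_i onward; the neuron fires at
    the first time the piecewise-linear potential reaches `threshold`.
    (Hypothetical formulation for illustration only.)"""
    order = np.argsort(in_times)
    t_sorted = np.asarray(in_times, dtype=float)[order]
    w_sorted = np.asarray(weights, dtype=float)[order]
    potential, slope, t_prev = 0.0, 0.0, 0.0
    for t_i, w_i in zip(t_sorted, w_sorted):
        # check whether the threshold is crossed before this spike arrives
        if slope > 0 and potential + slope * (t_i - t_prev) >= threshold:
            return t_prev + (threshold - potential) / slope
        potential += slope * (t_i - t_prev)
        slope += w_i
        t_prev = t_i
    if slope > 0:  # all inputs received; extrapolate to the crossing
        return t_prev + (threshold - potential) / slope
    return float('inf')  # potential never reaches threshold: no spike
```

Because the output is itself a spike time, layers of such neurons compose naturally, which is what allows the SCNN to replace the CNN layers of YOLOv2 while keeping all activity in the timing domain.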

NETWORK ARCHITECTURE
PREPROCESSING LAYER
DETECTION LAYER
NETWORK SPARSITY FOR ENERGY EFFICIENCY
TRAINING AND EXPERIMENTS
CONCLUSION