Abstract

Three-dimensional (3D) object detection is essential in autonomous driving. A 3D Lidar sensor can capture objects such as vehicles, cyclists, pedestrians, and other objects on the road. Although Lidar can generate point clouds in 3D space, it lacks the fine resolution of 2D image information. Therefore, Lidar–camera fusion has gradually become a practical approach to 3D object detection. Previous strategies focused on extracting voxel points and fusing feature maps; however, the biggest challenge remains extracting enough edge information to detect small objects. We found that attention modules are beneficial for detecting small objects. In this work, we developed a Frustum ConvNet with attention modules to fuse images from a camera and point clouds from a Lidar. A Multilayer Perceptron (MLP) and tanh activation functions were used in the attention modules. Furthermore, the attention modules were built on PointNet to perform multilayer edge detection for 3D object detection. Compared with the well-known previous method Frustum ConvNet, our method achieved competitive results, improving Average Precision (AP) for 3D object detection by 0.27%, 0.43%, and 0.36% in the easy, moderate, and hard cases, respectively, and AP for Bird's Eye View (BEV) object detection by 0.21%, 0.27%, and 0.01% in the easy, moderate, and hard cases, respectively, on the KITTI detection benchmarks. Our method also obtained the best AP results in four cases on the indoor SUN-RGBD dataset for 3D object detection.
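To make the attention design described above concrete, the following minimal sketch is offered as an illustrative assumption, not the authors' implementation: a CBAM-style channel-attention block in which a shared two-layer MLP with a tanh activation re-weights per-point features. The class name, layer sizes, and tensor shapes are hypothetical.

```python
# Minimal sketch (assumption): channel attention with a shared MLP and tanh,
# applied to point-wise features of shape (batch, channels, num_points).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Shared two-layer MLP; tanh is used in place of the usual ReLU/sigmoid.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.Tanh(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(x.mean(dim=2))           # global average pooling over points
        mx = self.mlp(x.max(dim=2).values)      # global max pooling over points
        weights = torch.tanh(avg + mx)          # per-channel attention weights
        return x * weights.unsqueeze(-1)        # re-weight the input features


if __name__ == "__main__":
    feats = torch.randn(2, 128, 1024)           # 2 frustums, 128 channels, 1024 points
    print(ChannelAttention(128)(feats).shape)   # torch.Size([2, 128, 1024])
```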

Highlights

  • The detection of object instances in 3D sensory data has tremendous importance in many applications

  • We propose an improved attention module by adding a Multilayer Perceptron (MLP)

  • Frustum-level features are obtained from each frustum through PointNet and the attention modules and are re-formed as a 2D feature map (see the sketch after this list)
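The last highlight can be illustrated with a brief sketch. This is an assumption of how the step could look, not the paper's code: per-frustum point sets are encoded by a stand-in PointNet-style network, max-pooled into one feature vector per frustum, and the vectors are stacked into a 2D feature map (channels × frustums) for a downstream convolutional network. The `pointnet` stand-in, the input layout (x, y, z, intensity), and all sizes are hypothetical.

```python
# Minimal sketch (assumption): re-forming per-frustum PointNet features
# into a 2D feature map along the frustum axis.
import torch
import torch.nn as nn

pointnet = nn.Sequential(            # stand-in for a PointNet-style encoder
    nn.Conv1d(4, 64, 1), nn.ReLU(),
    nn.Conv1d(64, 128, 1),
)

def frustum_feature_map(frustum_points: torch.Tensor) -> torch.Tensor:
    # frustum_points: (num_frustums, 4, points_per_frustum) with x, y, z, intensity.
    point_feats = pointnet(frustum_points)          # (num_frustums, 128, points)
    frustum_feats = point_feats.max(dim=2).values   # max-pool over points -> (num_frustums, 128)
    # Stack per-frustum vectors into a map of shape (1, channels, num_frustums),
    # ready for convolutions along the frustum axis.
    return frustum_feats.t().unsqueeze(0)

fmap = frustum_feature_map(torch.randn(32, 4, 256))
print(fmap.shape)  # torch.Size([1, 128, 32])
```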



Introduction

The detection of object instances in 3D sensory data has tremendous importance in many applications. Three-dimensional (3D) point clouds are usually transformed into images or voxel grids [5] before being processed; PointNet [3,4], which works directly on raw point clouds, shows good performance in 3D object detection. To address the difficulty of extracting enough edge information to detect small objects, we turn to the attention modules used in 2D object detection methods. Fan et al. [6] proposed a Region Proposal Network (RPN) with an attention module, enabling the detector to attend to objects at high resolution while perceiving the surroundings at low resolution. These works inspired us to use attention modules for object detection in 3D point clouds. We propose a Frustum ConvNet with attention modules for 3D object detection, in which both images and point clouds are used.

Related Works
Attention Module in Object Detection
Activation Function in Neural Network
Frustum ConvNet
The Improved CBAM Attention Model for Point Cloud Detection
Experimental Results
Method
Conclusions and Future Works