Abstract

Most binocular 3D detection algorithms must select a large number of anchor points, which slows feature extraction. To address this problem, an anchor-guided 3D object detection algorithm for autonomous driving, called Fast Guided Anchored Stereo RCNN (FGAS RCNN), is proposed based on the Stereo Region-based Convolutional Neural Network (Stereo RCNN). The proposed FGAS framework is divided into two stages. In the first stage, a probability map is generated for the left and right input images to determine foreground positions, and sparse anchor points and their corresponding sparse anchor boxes are generated from this prior information. The left and right anchors are treated as a whole to generate 2D preselection boxes. In the second stage, a keypoint generation network based on a Feature Pyramid Network (FPN) produces keypoints, which are combined with stereo regression to generate 3D preselection boxes. Finally, instance-level disparity estimation is proposed to address the loss of pixel-level information in the original image; the instance-level disparity is combined with instance segmentation masks to improve the accuracy of the center depth of the 3D bounding box. Extensive experiments on the challenging KITTI and nuScenes datasets show that the proposed method reduces computational cost while maintaining high regression accuracy, without any depth information or positional priors. Compared with other methods, the proposed method offers higher efficiency, better robustness, and stronger generalization ability.
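As a rough illustration of the first-stage anchor guidance described above, the sketch below thresholds a predicted foreground probability map so that anchor boxes are generated only at sparse foreground locations. It is a minimal sketch in PyTorch; the module name, channel widths, anchor scales, stride, and threshold are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn


class GuidedAnchorHead(nn.Module):
    """Sketch of stage one: a foreground probability map gates sparse anchors."""

    def __init__(self, in_channels=256, anchor_scales=(32, 64, 128), threshold=0.5):
        super().__init__()
        # 1x1 conv that predicts a per-location foreground probability.
        self.prob_conv = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.anchor_scales = anchor_scales
        self.threshold = threshold

    def forward(self, feat, stride=16):
        # Foreground probability map over the feature grid, shape (N, 1, H, W).
        prob = torch.sigmoid(self.prob_conv(feat))
        # Keep only locations whose foreground probability exceeds the threshold.
        keep = prob[0, 0] > self.threshold
        ys, xs = torch.nonzero(keep, as_tuple=True)

        # Centres of the retained locations in image coordinates.
        cx = (xs.float() + 0.5) * stride
        cy = (ys.float() + 0.5) * stride

        # One square anchor per scale at each retained centre, as (x1, y1, x2, y2).
        anchors = []
        for s in self.anchor_scales:
            half = s / 2.0
            anchors.append(torch.stack([cx - half, cy - half, cx + half, cy + half], dim=1))
        return prob, torch.cat(anchors, dim=0)


if __name__ == "__main__":
    head = GuidedAnchorHead()
    left_feat = torch.randn(1, 256, 48, 156)  # e.g. one FPN level of the left image
    prob_map, sparse_anchors = head(left_feat)
    print(prob_map.shape, sparse_anchors.shape)
```

In the full framework, the same sparse locations would gate anchors on both the left and right feature maps so that each left-right anchor pair is regressed jointly into a 2D preselection box.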
