Abstract

The high bandwidth and low latency of 6G network technology enable the practical deployment of monocular 3D object detection on vehicle platforms. Pseudo-LiDAR-based monocular 3D object detection is a low-cost, low-power alternative to LiDAR solutions in autonomous driving. However, this technique suffers from two main problems: (1) the poor quality of the generated Pseudo-LiDAR point clouds, caused by the nonlinear error distribution of monocular depth estimation, and (2) the weak representation capability of point cloud features, because existing LiDAR-based 3D detection networks neglect the global geometric structure of point clouds. We therefore propose a Pseudo-LiDAR confidence sampling strategy and a hierarchical geometric feature extraction module for monocular 3D object detection. We first design a point cloud confidence sampling strategy based on a 3D Gaussian distribution, which assigns low confidence to points with large depth-estimation error and filters them out accordingly. We then present a hierarchical geometric feature extraction module that aggregates local neighborhood features and applies a dual transformer to capture the global geometric features of the point cloud. Finally, our detection framework builds on Point-Voxel R-CNN (PV-RCNN), taking the high-quality Pseudo-LiDAR points and the enriched geometric features as input. Experimental results show that our method achieves satisfactory performance in monocular 3D object detection.
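
The abstract does not give the exact confidence formulation, so the following is only a minimal illustrative sketch of the idea of Gaussian confidence filtering. It assumes each Pseudo-LiDAR point comes with an estimated depth error (a hypothetical input, e.g. from an uncertainty estimate of the depth network), models confidence as an unnormalized Gaussian of that error, and drops low-confidence points; the function name and parameters are placeholders, not the paper's method.

    import numpy as np

    def gaussian_confidence_filter(points, depth_errors, sigma=1.0, conf_thresh=0.5):
        # points       : (N, 3) array of Pseudo-LiDAR points (x, y, z) in metres.
        # depth_errors : (N,) estimated depth error per point (hypothetical input).
        # sigma        : spread of the Gaussian confidence model (assumed value).
        # conf_thresh  : points whose confidence falls below this value are dropped.
        #
        # Confidence is modelled as an unnormalized Gaussian of the expected error,
        # so points with large expected depth error receive small confidence.
        conf = np.exp(-0.5 * (depth_errors / sigma) ** 2)
        keep = conf >= conf_thresh
        return points[keep], conf[keep]

    # Toy usage: the error is assumed to grow nonlinearly with depth, mirroring the
    # nonlinear error distribution of monocular depth estimation noted above.
    pts = np.random.rand(1000, 3) * np.array([40.0, 4.0, 70.0])
    err = 0.01 * pts[:, 2] ** 2
    filtered_pts, confidences = gaussian_confidence_filter(pts, err, sigma=2.0)

The filtered points would then be fed to the downstream detector (PV-RCNN in the authors' framework) in place of the raw Pseudo-LiDAR cloud.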
