Abstract

3D object detection plays a key role in the perception pipelines of autonomous driving and industrial robot automation. The inherent characteristics of point clouds pose substantial challenges to both spatial representation and association analysis. The unordered structure of point cloud data and the density variations caused by gradually changing distances to the LiDAR sensor make accurate and robust 3D object detection even more difficult. In this paper, we present POAT-Net, a novel transformer network for 3D point cloud object detection. The transformer architecture, credited with great success in Natural Language Processing (NLP), also shows inspiring potential for point cloud processing. POAT-Net is inherently insensitive to permutations of the elements of an unordered point cloud. Associations between local points contribute significantly to 3D object detection and other 3D tasks, and parallel offset-attention is leveraged to highlight and capture these subtle local associations. To overcome the non-uniform density distribution across objects, we exploit a Normalized Multi-Resolution Grouping (NMRG) strategy that enhances POAT-Net's adaptability to varying point densities. Quantitative experimental results on the KITTI3D dataset demonstrate that our method achieves state-of-the-art performance.
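
The abstract names offset-attention as the mechanism used to capture associations between local points. The paper defines the exact POAT-Net layer; as a rough illustration only, the sketch below implements the commonly used offset-attention formulation on per-point feature maps: attention-weighted features are subtracted from the input, passed through a Linear-BatchNorm-ReLU block, and added back as a residual. The class name, channel sizes, and normalization choices are illustrative assumptions, not the authors' implementation, and the parallel arrangement of heads in POAT-Net is not reproduced here.

    import torch
    import torch.nn as nn

    class OffsetAttention(nn.Module):
        """One offset-attention layer over per-point features of shape (B, C, N).

        Illustrative sketch only: layer names and sizes are assumptions, not the
        POAT-Net implementation described in the paper.
        """
        def __init__(self, channels: int):
            super().__init__()
            self.q_conv = nn.Conv1d(channels, channels // 4, 1, bias=False)
            self.k_conv = nn.Conv1d(channels, channels // 4, 1, bias=False)
            self.v_conv = nn.Conv1d(channels, channels, 1)
            self.trans_conv = nn.Conv1d(channels, channels, 1)  # linear part of the LBR block
            self.bn = nn.BatchNorm1d(channels)
            self.act = nn.ReLU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            q = self.q_conv(x).permute(0, 2, 1)         # (B, N, C/4)
            k = self.k_conv(x)                          # (B, C/4, N)
            v = self.v_conv(x)                          # (B, C, N)
            energy = torch.bmm(q, k)                    # (B, N, N) point-to-point affinities
            attn = torch.softmax(energy, dim=-1)
            attn = attn / (1e-9 + attn.sum(dim=1, keepdim=True))  # re-normalize per column
            x_attn = torch.bmm(v, attn)                 # (B, C, N) attention-weighted features
            # Offset: difference between input and attention features, passed
            # through Linear-BatchNorm-ReLU and added back as a residual.
            x_off = self.act(self.bn(self.trans_conv(x - x_attn)))
            return x + x_off

    # Usage: a batch of 2 clouds, 1024 points each, 128-dimensional features per point.
    feats = torch.randn(2, 128, 1024)
    out = OffsetAttention(128)(feats)   # same shape as the input, (2, 128, 1024)

Every operation in this sketch is either pointwise or a full pairwise attention over all points, which is why such a layer is insensitive to the ordering of points within the input cloud.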
