Abstract
Monocular 3D object detection aims to predict three-dimensional object properties in the physical world from a single two-dimensional image. In practice, occlusion inevitably limits detection performance. To address the challenge of directly representing the spatial information of occlusion relations, we propose point-wise visibility states that describe the spatial distance relationships within occlusion pairs and the orientation information they imply. Introducing visibility states better represents the degree and direction of occlusion and enhances the network's understanding of occlusion. Furthermore, we redesign an end-to-end detector that encodes visibility-state features and integrates occlusion ordering cues over the whole image to assist object localization in world space. Experiments on the KITTI3D dataset show that our method successfully establishes visibility states as occlusion cues and improves the performance of the original detector. Our method is effective, and its performance is comparable with state-of-the-art approaches, particularly in the Moderate and Hard cases. Specifically, it raises 3D detection accuracy on KITTI3D to 42.75% in the Moderate case and 37.03% in the Hard case.
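To make the idea of visibility states more concrete, the sketch below illustrates one plausible way such a label could be derived; the paper's actual definition and implementation are not given in the abstract, so the function `visibility_states`, its inputs, and the simple box-overlap/depth-ordering rule are assumptions for illustration only, not the authors' method.

```python
# Illustrative sketch only: assigns a per-point visibility label from a pair of
# projected 2D boxes and their object depths, so that occlusion order (which
# object is in front) and occlusion direction (where the occluder lies) become
# explicit, encodable cues. All names and the labeling rule are hypothetical.

import numpy as np

def point_in_box(points, box):
    """Boolean mask of 2D points inside an axis-aligned box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return (points[:, 0] >= x1) & (points[:, 0] <= x2) & \
           (points[:, 1] >= y1) & (points[:, 1] <= y2)

def visibility_states(points, own_box, own_depth, other_box, other_depth):
    """
    Assign a visibility state to each sampled point of an object:
      0 = visible (not covered by the other box, or this object is nearer)
      1 = occluded (covered by the other box and this object is farther)
    Also return a coarse occlusion direction: a unit vector from this object's
    box centre towards the occluder's box centre (zero if nothing occludes).
    """
    covered = point_in_box(points, other_box)
    farther = own_depth > other_depth
    states = (covered & farther).astype(np.int64)

    own_centre = np.array([(own_box[0] + own_box[2]) / 2.0,
                           (own_box[1] + own_box[3]) / 2.0])
    other_centre = np.array([(other_box[0] + other_box[2]) / 2.0,
                             (other_box[1] + other_box[3]) / 2.0])
    direction = other_centre - own_centre
    norm = np.linalg.norm(direction)
    if states.any() and norm > 0:
        direction = direction / norm
    else:
        direction = np.zeros(2)
    return states, direction

# Example: a car (depth 20 m) partially hidden behind a van (depth 12 m).
points = np.array([[100.0, 180.0], [140.0, 180.0], [180.0, 180.0]])
car_box = (90.0, 150.0, 190.0, 210.0)
van_box = (150.0, 140.0, 260.0, 220.0)
states, direction = visibility_states(points, car_box, 20.0, van_box, 12.0)
print(states)     # [0 0 1] -> only the rightmost sampled point is occluded
print(direction)  # roughly [1, 0]: the occluder lies to the right
```

Per-point labels of this kind, concatenated with image features, are one way a detector could consume occlusion ordering and direction cues; the paper's actual feature-encoding design may differ.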