Abstract

Monocular three-dimensional (3D) object detection (OD) is an essential and challenging task in autonomous driving. Modern convolutional neural network-based architectures for OD rely heavily on data augmentation (DA) and self-supervised learning (SSL); however, these techniques remain relatively unexplored for monocular 3D OD, especially in the autonomous driving domain. DA techniques for two-dimensional (2D) OD do not directly extend to 3D objects: the literature shows that extending them requires adapting the 3D geometry of the input scene and synthesizing new viewpoints, which in turn demands accurate depth information that may not always be available. We propose augmentations for monocular 3D OD that do not require view synthesis. The proposed method combines DA with an SSL approach that uses multi-object labeling as the pretext task. We evaluate the proposed DA-SSL approach on the RTM3D detection model (baseline), with and without the application of DA. The results demonstrate improvements of 2% to 3% in mAP 3D and 0.9% to 1.5% in Bird's Eye View (BEV) scores using SSL over the baseline. We also propose an inverse class frequency weighted (ICFW) mAP score that highlights detection improvements for low-frequency classes in class-imbalanced datasets with long tails. Accounting for the class imbalance in the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) validation dataset, we observe a 4% to 5% increase in the ICFW mAP 3D and BEV metrics with the pretext task.
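The abstract introduces an ICFW mAP score but does not spell out its formula. As a minimal sketch, assuming the score weights each class's average precision (AP) in proportion to the inverse of its ground-truth instance count (weights normalized to sum to one), it could look like the following; the function name `icfw_map` and the example class counts are illustrative assumptions, not taken from the paper:

```python
def icfw_map(ap_per_class, counts_per_class):
    """Inverse class frequency weighted mAP (hypothetical sketch).

    ap_per_class:     dict mapping class name -> average precision (AP)
    counts_per_class: dict mapping class name -> number of ground-truth instances
    """
    # Weight each class by 1 / frequency, then normalize weights to sum to 1,
    # so rare classes contribute more to the aggregate score.
    inv = {c: 1.0 / counts_per_class[c] for c in ap_per_class}
    total = sum(inv.values())
    return sum((inv[c] / total) * ap_per_class[c] for c in ap_per_class)

# Illustrative example: with a long-tailed class distribution, the rare
# "cyclist" class dominates the weighted score, so gains on rare classes
# move ICFW mAP far more than they move plain (uniformly averaged) mAP.
ap = {"car": 0.80, "pedestrian": 0.50, "cyclist": 0.40}
counts = {"car": 28742, "pedestrian": 4487, "cyclist": 1627}  # assumed counts
print(icfw_map(ap, counts))
```

With equal class counts the weights are uniform and the score reduces to ordinary mAP, which is the sanity check one would expect of such a reweighting.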
