Abstract
This paper presents a method for multi-view 3D robotic object recognition in cluttered indoor scenes. We explicitly model the occlusions that cause failures in visual detectors by learning a generative appearance-occlusion model from a training set of annotated 3D objects, images, and point clouds. A Bayesian 3D object likelihood incorporates visual information from many views as well as geometric priors on object size and position. An iterative, sampling-based inference technique determines object locations under the model. We also contribute a novel robot-collected data set with images and point clouds from multiple views of 60 scenes, containing over 600 manually annotated 3D objects that account for over ten thousand bounding boxes. This data set has been released to the community. Our results show that the system robustly recognizes objects in realistic scenes, significantly improving recognition performance in clutter.
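As an illustration only, not the paper's implementation, the following Python sketch shows one way the scoring and inference described above could be structured: a per-view visibility term from an occlusion model discounts detector evidence, geometric priors weight object size and position, and a simple Metropolis sampler stands in for the iterative, sampling-based inference. Every function, constant, and box parameterization here is a hypothetical placeholder, not the paper's actual model or API.

import numpy as np

rng = np.random.default_rng(0)
BACKGROUND_RATE = 0.05  # assumed detector false-positive rate when occluded (illustrative)

# Placeholder stubs: in the paper these would come from the learned
# appearance-occlusion model and the visual detector. They are dummies
# here so the sketch runs end to end.
def occlusion_prob(box, view):
    return 0.7                      # P(object visible in this view)

def detector_score(box, view):
    return 0.6                      # detector likelihood given visibility

def size_prior(box):
    return 1.0                      # geometric prior on object size

def position_prior(box):
    # Illustrative Gaussian prior pulling hypotheses toward the scene center
    return np.exp(-np.sum((box[:3] - 0.5) ** 2))

def log_posterior(box, views):
    """Bayesian 3D object score: multi-view evidence times geometric priors."""
    lp = np.log(size_prior(box)) + np.log(position_prior(box))
    for view in views:
        p_vis = occlusion_prob(box, view)
        p_det = detector_score(box, view)
        # Marginalize over visibility: detector evidence when the object is
        # visible, background rate when it is occluded in this view.
        lp += np.log(p_vis * p_det + (1.0 - p_vis) * BACKGROUND_RATE)
    return lp

def infer(views, n_iters=1000, step=0.05):
    """Iterative, sampling-based inference sketched as Metropolis over 3D boxes."""
    box = rng.uniform(0, 1, size=6)                # (x, y, z, w, h, d) hypothesis
    lp = log_posterior(box, views)
    for _ in range(n_iters):
        cand = box + rng.normal(0, step, size=6)   # perturb the current hypothesis
        cand_lp = log_posterior(cand, views)
        if np.log(rng.uniform()) < cand_lp - lp:   # Metropolis acceptance
            box, lp = cand, cand_lp
    return box, lp

if __name__ == "__main__":
    views = range(5)                               # five camera views of the scene
    print(infer(views))

Because the detector and occlusion stubs are constants, only the position prior varies in this sketch; the sampler therefore drifts toward the scene center, which is enough to show the mechanics of combining per-view evidence with priors under sampling-based search.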