Abstract
We present a unified occlusion model for object instance detection under arbitrary viewpoint. Whereas previous approaches primarily modeled local coherency of occlusions or attempted to learn the structure of occlusions from data, we propose to explicitly model occlusions by reasoning about 3D interactions of objects. Our approach accurately represents occlusions under arbitrary viewpoint without requiring additional training data, which can often be difficult to obtain. We validate our model by incorporating occlusion reasoning with the state-of-the-art LINE2D and Gradient Network methods for object instance detection and demonstrate significant improvement in recognizing texture-less objects under severe occlusions.
Highlights
Occlusions are common in real world scenes and are a major obstacle to robust object detection
Once the prior probability drops below some level λ, the object can no longer be reliably detected; since objects are detected under multiple views, it is important to tease apart the effect of occlusion from the effect of viewpoint
We will refer to this system as robust LINE2D
Summary
Occlusions are common in real world scenes and are a major obstacle to robust object detection. Researchers have shown in the past that incorporating 3D geometric understanding of scenes [1, 9] improves the performance of object detection systems. Following these approaches, we propose to reason about occlusions by explicitly modeling 3D interactions of objects. We incorporate occlusion reasoning into object detection in two stages: (1) a bottom-up stage which hypothesizes the likelihood of occluded regions from the image data, followed by (2) a top-down stage which uses prior knowledge, represented by the occlusion model, to score the plausibility of the occluded regions. The focus of this paper is to demonstrate that a relatively simple model of the 3D interaction of objects can represent occlusions effectively for instance detection of texture-less objects under arbitrary view. We evaluate our approach by extending the state-of-the-art LINE2D [7] system, and demonstrate a significant improvement in detection performance on a challenging occlusion dataset.
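To make the two-stage combination concrete, below is a minimal sketch of how bottom-up occlusion hypotheses might be fused with a top-down occlusion prior when scoring a template-based detection. It assumes hypothetical per-point visibility priors derived from a 3D occlusion model and per-point gradient-match scores from LINE2D-style matching; the function name, parameters (including the threshold λ mentioned in the highlights), and the scoring rule are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def occlusion_aware_score(match_scores, visibility_prior, match_threshold=0.5, lam=0.1):
    """Combine bottom-up occlusion evidence with a top-down occlusion prior.

    match_scores     : per-template-point gradient match scores in [0, 1]
                       (bottom-up evidence from the image).
    visibility_prior : per-template-point probability of being visible,
                       derived from the 3D occlusion model (top-down prior).
    match_threshold  : points scoring below this are hypothesized occluded.
    lam              : points whose visibility prior falls below lam are
                       ignored entirely (assumed occluded a priori).
    """
    match_scores = np.asarray(match_scores, dtype=float)
    visibility_prior = np.asarray(visibility_prior, dtype=float)

    # Drop points the prior already deems almost certainly occluded.
    keep = visibility_prior >= lam
    scores, prior = match_scores[keep], visibility_prior[keep]

    # Bottom-up: hypothesize which remaining points are occluded in this image.
    matched = scores >= match_threshold

    # Top-down: matched points contribute their evidence; unmatched points are
    # penalized only to the extent the prior says they should have been visible.
    contribution = np.where(matched, scores, 1.0 - prior)
    return contribution.mean()


# Toy usage: an upright object whose lower half is likely to be occluded by
# clutter resting on the same surface (visibility prior grows with height).
rng = np.random.default_rng(0)
n = 64
visibility_prior = np.linspace(0.2, 0.95, n)          # bottom -> top of object
match_scores = np.clip(visibility_prior + 0.1 * rng.standard_normal(n), 0, 1)
print(round(occlusion_aware_score(match_scores, visibility_prior), 3))
```

The key design point this sketch illustrates is that missing edge evidence is not penalized uniformly: unmatched points in regions the 3D model expects to be occluded (e.g. near the supporting surface) cost little, while unmatched points in regions expected to be visible count against the hypothesis.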