Abstract
We address annotation-free instance segmentation in the wild, aiming to eliminate the expensive cost of manual mask annotation. Existing approaches use appearance cues, such as color, edges, and texture, to generate pseudo masks for instance segmentation. However, because visual appearance alone is insufficient to define an object, these methods fail to distinguish objects from the background in complex scenes. Beyond visual cues, objects are spatially contiguous and move coherently over time, which suggests that geometry cues, such as spatial continuity and motion consistency, can also be exploited for this problem. To exploit geometry cues directly, we propose an affinity-based paradigm for annotation-free instance segmentation. This paradigm, called object affinity learning, is a proxy task for annotation-free instance segmentation: it learns feature representations from geometry cues to predict whether two pixels belong to the same object. At inference time, the learned object affinity can be converted into instance segmentation masks by a graph partition algorithm. Object affinity learning achieves substantially better instance segmentation performance than existing pseudo-mask-based methods on the large-scale Waymo Open Dataset and on the KITTI dataset.
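The abstract does not specify which graph partition algorithm converts pairwise affinities into masks, so the following is only a minimal illustrative sketch: it thresholds a predicted pixel-pair affinity matrix and groups pixels into instances via connected components (union-find). The function name `affinity_to_instances` and the threshold value are assumptions, not the paper's method.

```python
import numpy as np

def affinity_to_instances(affinity, threshold=0.5):
    """Partition pixels into instances by thresholding pairwise affinity
    and taking connected components with union-find.

    affinity: (n, n) symmetric matrix; affinity[i, j] ~ probability that
    pixels i and j belong to the same object. Hypothetical interface.
    """
    n = affinity.shape[0]
    parent = list(range(n))

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Link every pixel pair whose predicted affinity exceeds the threshold.
    for i in range(n):
        for j in range(i + 1, n):
            if affinity[i, j] > threshold:
                union(i, j)

    # Relabel component roots to consecutive instance ids.
    labels = {}
    out = np.empty(n, dtype=int)
    for i in range(n):
        r = find(i)
        if r not in labels:
            labels[r] = len(labels)
        out[i] = labels[r]
    return out

# Toy example: 4 pixels forming two clear groups, {0, 1} and {2, 3}.
aff = np.array([[1.0, 0.9, 0.1, 0.0],
                [0.9, 1.0, 0.2, 0.1],
                [0.1, 0.2, 1.0, 0.8],
                [0.0, 0.1, 0.8, 1.0]])
print(affinity_to_instances(aff))  # -> [0 0 1 1]
```

In practice a real system would operate on a sparse pixel graph (e.g. only nearby pixels) rather than a dense n-by-n matrix, since n is the number of pixels in the image.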
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence