Abstract
Recent panoptic segmentation and instance segmentation methods usually rely on region-based approaches or highly specialized combinations of heuristic modules, followed by post-processing. Moreover, most recent methods neglect linear objects with low fill rates and cannot recognize pixels located at bounding-box margins. We propose a branched, end-to-end trainable multi-task architecture that treats panoptic segmentation as a pixel-level grouping problem. The embedding branch regresses pixels into an embedding space so that pixels from the same group lie close together while pixels from different groups are separated by a specified margin; every pixel in an image is assigned without overlap. The semantic branch produces the best seed scores, with class labels serving as clustering centers, and the further-embedding branch disentangles each pixel in the embedding space. As a result, we can segment both thing and stuff classes and account for every pixel in the image. We obtain state-of-the-art results on Pascal VOC 2012 and Cityscapes.
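For intuition, the following is a minimal sketch of a generic pull/push embedding loss of the kind the abstract describes: pixels of the same group are pulled toward their group centroid, while centroids of different groups are pushed at least a margin apart. The function name, the hyperparameters delta_pull and delta_push, and the exact formulation are assumptions for illustration, not the paper's loss.

```python
# Sketch of a pull/push embedding loss (illustrative; not the paper's exact formulation).
import torch

def pull_push_loss(embeddings, labels, delta_pull=0.5, delta_push=1.5):
    """embeddings: (N, D) pixel embeddings; labels: (N,) integer group ids."""
    groups = torch.unique(labels)
    centroids = torch.stack([embeddings[labels == g].mean(dim=0) for g in groups])

    # Pull term: penalize pixels farther than delta_pull from their group centroid.
    pull = 0.0
    for i, g in enumerate(groups):
        dist = torch.norm(embeddings[labels == g] - centroids[i], dim=1)
        pull = pull + torch.clamp(dist - delta_pull, min=0).pow(2).mean()
    pull = pull / len(groups)

    # Push term: penalize centroid pairs closer than delta_push.
    push = 0.0
    if len(groups) > 1:
        pair_dist = torch.cdist(centroids, centroids)            # (G, G) pairwise distances
        off_diag = ~torch.eye(len(groups), dtype=torch.bool)     # exclude self-pairs
        push = torch.clamp(delta_push - pair_dist[off_diag], min=0).pow(2).mean()

    return pull + push


# Usage: six random 4-D pixel embeddings split into two groups.
emb = torch.randn(6, 4, requires_grad=True)
lbl = torch.tensor([0, 0, 0, 1, 1, 1])
loss = pull_push_loss(emb, lbl)
loss.backward()
```

At inference time, grouping pixels then amounts to clustering in the embedding space, which is consistent with the abstract's use of semantic seed scores as clustering centers.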