Abstract
Video panoptic segmentation is an important but challenging task in computer vision: it not only requires panoptic segmentation of each frame, but also must associate the same instance across adjacent frames. Due to the lack of temporal coherence modeling, most existing approaches suffer from identity switches during instance association and cannot handle the ambiguous segmentation boundaries caused by motion blur. To address these issues, we introduce a simple yet effective Instance Motion Tendency Network (IMTNet) for video panoptic segmentation. It learns a global motion tendency map for instance association and a hierarchical classifier for motion-boundary refinement. Specifically, a Global Motion Tendency Module (GMTM) is designed to learn robust motion features from optical flow, which directly associate each instance in the previous frame with the corresponding instance in the current frame. In addition, we propose a Motion Boundary Refinement Module (MBRM) that learns a hierarchical classifier for the boundary pixels of moving targets, effectively revising inaccurate segmentation predictions. Experimental results on the Cityscapes and Cityscapes-VPS datasets show that our IMTNet outperforms most state-of-the-art approaches.
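The abstract does not give implementation details of GMTM, but the general idea of associating instances across frames via optical flow can be sketched as follows: warp each previous-frame instance mask forward by the flow field and match it to the current-frame mask with the highest IoU. All function names and the greedy matching scheme here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def warp_mask(mask, flow):
    """Warp a binary instance mask forward with a dense optical flow field.
    flow[y, x] = (dx, dy) displacement from the previous to the current frame.
    (Illustrative nearest-neighbor warping, not the paper's GMTM.)"""
    h, w = mask.shape
    warped = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    nx = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
    ny = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
    warped[ny, nx] = 1
    return warped

def associate(prev_masks, curr_masks, flow, iou_thresh=0.3):
    """Greedily match flow-warped previous masks to current masks by IoU.
    Returns {prev index: curr index}, with -1 for unmatched instances."""
    matches, used = {}, set()
    for i, pm in enumerate(prev_masks):
        wm = warp_mask(pm, flow)
        best_j, best_iou = -1, iou_thresh
        for j, cm in enumerate(curr_masks):
            if j in used:
                continue
            union = np.logical_or(wm, cm).sum()
            iou = np.logical_and(wm, cm).sum() / union if union else 0.0
            if iou > best_iou:
                best_j, best_iou = j, iou
        matches[i] = best_j
        if best_j >= 0:
            used.add(best_j)
    return matches
```

A pure warp-and-match baseline like this degrades under inaccurate flow or large motion, which is presumably why the paper learns motion features rather than using raw flow directly.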
Published in: IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)