Abstract
In the fields of computer vision and graphics, clothing segmentation is a challenging and practical task, typically cast as a fine-grained semantic segmentation problem. Unlike generic semantic segmentation, clothing segmentation has domain-specific properties: diverse appearance variations, non-rigid geometric deformations, and learning from small samples. To address these issues, we propose a semantic locality-preserving segmentation model that adaptively pairs each clothing image with a semantically similar (e.g., in appearance or pose) auxiliary exemplar retrieved by search. By modeling the interactions between a clothing image and its exemplar, the model uncovers more intrinsic knowledge about the locality manifold structure of clothing images, which makes learning under the small-sample regime more stable and tractable. In addition, we present a CNN based on deformable convolutions to extract non-rigid geometry-aware features for clothing images. We apply the semantic locality-preserving segmentation model to both images and videos, achieving favorable clothing segmentation performance. Experimental results demonstrate the effectiveness of the proposed model against state-of-the-art approaches.
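The abstract's mention of deformable convolutions refers to convolutions whose sampling grid is shifted by learned per-position offsets, letting the kernel follow non-rigid deformations such as cloth folds. The following is a minimal single-channel NumPy sketch of that sampling idea, not the paper's implementation; all function names and shapes here are illustrative assumptions (a practical model would use an optimized operator such as `torchvision.ops.DeformConv2d`).

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly sample a 2-D array at fractional coords (zero padding outside)."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < h and 0 <= xx < w:
                # Weight each corner by its distance to the sample point.
                val += (1 - abs(y - yy)) * (1 - abs(x - xx)) * img[yy, xx]
    return val

def deform_conv2d(img, weight, offsets):
    """Toy single-channel 3x3 deformable convolution (stride 1, no padding).

    offsets has shape (H_out, W_out, 9, 2): a learned (dy, dx) shift per
    kernel tap per output location; all-zero offsets reduce this to an
    ordinary convolution on the regular grid.
    """
    kh, kw = weight.shape
    h_out = img.shape[0] - kh + 1
    w_out = img.shape[1] - kw + 1
    out = np.zeros((h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            for k in range(kh * kw):
                ky, kx = divmod(k, kw)
                dy, dx = offsets[i, j, k]
                # Sample at the regular tap position plus the learned offset.
                out[i, j] += weight[ky, kx] * bilinear_sample(
                    img, i + ky + dy, j + kx + dx)
    return out

# Sanity check: with zero offsets, a 3x3 mean filter over a linear ramp
# reproduces the centre pixel of each patch.
img = np.arange(25, dtype=float).reshape(5, 5)
w = np.ones((3, 3)) / 9.0
zero_off = np.zeros((3, 3, 9, 2))
print(deform_conv2d(img, w, zero_off))
```

In a trained network the offsets are themselves predicted by a small convolutional branch, so the effective receptive field adapts to garment deformation.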
Published in: IEEE Transactions on Circuits and Systems for Video Technology