Abstract

In this paper, we address an important and practical problem of clothing cosegmentation (CCS): given multiple fashion model photos with natural backgrounds on e-commerce websites, automatically and simultaneously segment all images and extract the clothing regions. Cluttered backgrounds, variations in color and style, and inconsistent human poses make this a challenging task. We propose a novel CCS algorithm that improves the accuracy of clothing extraction by exploiting the properties of multiple clothing images showing the same apparel. First, co-salient objects are computed by detecting the upper bodies of the fashion models and transferring their locations across the images. Based on the coarse clothing regions determined by upper-body localization and co-salient object detection, foreground (clothing) and background Gaussian mixture models are estimated. Finally, the clothing region in each image is extracted by iterative energy minimization based on graph cuts. Although the proposed cosegmentation algorithm is designed mainly for multiple clothing images, it can also be applied to single-image segmentation without modification. Experiments demonstrate that the proposed approach outperforms state-of-the-art cosegmentation methods as well as traditional single-image segmentation solutions on shopping images.
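To make the pipeline concrete, the following is a minimal single-image sketch in Python. It assumes OpenCV's upper-body Haar cascade as the detector and uses OpenCV's GrabCut as a stand-in for the foreground/background GMM estimation and iterative graph-cut energy minimization; the box-expansion heuristic and all parameter values are illustrative assumptions, and the cross-image sharing of co-salient information that distinguishes the proposed CCS method is not reproduced here:

# Hypothetical single-image sketch: an upper-body detection seeds an iterative
# GMM + graph-cuts segmentation. OpenCV's GrabCut stands in for the paper's
# energy-minimization step; the box heuristics are illustrative assumptions.
import cv2
import numpy as np

def extract_clothing(image_path: str, iterations: int = 5) -> np.ndarray:
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)

    # 1) Locate the fashion model's upper body to get a coarse clothing region.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_upperbody.xml")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    if len(boxes) == 0:
        raise RuntimeError("no upper body detected")
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])  # keep largest detection

    # 2) Expand the detection downward so the coarse box covers the garment
    #    (heuristic assumption; the paper derives this region from co-salient
    #    objects shared across multiple images instead).
    H, W = img.shape[:2]
    rect = (max(x - w // 4, 0), y,
            min(w + w // 2, W - max(x - w // 4, 0)), min(h * 3, H - y))

    # 3) Iteratively refine foreground/background GMMs and minimize the
    #    graph-cut energy, starting from the coarse rectangle.
    mask = np.zeros((H, W), np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)

    # Pixels marked definite or probable foreground form the clothing mask.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
    return (fg * 255).astype(np.uint8)

In the multi-image setting described above, the foreground and background GMMs would instead be estimated jointly from the coarse clothing regions of all photos of the same apparel before the per-image graph-cut step.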
