Abstract

With the rapid growth of the apparel e-commerce industry, more and more people are working in this field. In practice, clothing images are easily affected by background, viewpoint, occlusion, and other factors, so clothing segmentation is often not accurate enough. This paper proposes a clothing segmentation network built on clothing texture and semantic decoding modules. Based on a Swin Transformer backbone, the network introduces a clothing texture analysis module (CTAM) and a clothing semantic decoding module (CSDM). During encoding, the CTAM extracts clothing texture features to enhance spatial information; during decoding, the CSDM uses mixed attention to enhance image context information and improve segmentation accuracy. Experimental results show that, compared with the original model, the proposed method increases mean pixel accuracy on the LIP dataset by about 2% and mean intersection over union (mIoU) by about 1.5%. The network noticeably improves the segmentation of hard-to-distinguish garment boundaries and the overall accuracy of garment image segmentation.
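The paper does not detail the CSDM's mixed attention here, but the general idea of mixing channel and spatial attention over a feature map can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `mixed_attention`, the gating scheme (channel gate via global average pooling, spatial gate via channel-wise mean and max), and the NumPy formulation are all assumptions chosen for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mixed_attention(feat):
    """Illustrative mixed (channel + spatial) attention, NOT the paper's CSDM.

    feat: feature map of shape (C, H, W).
    Returns a reweighted feature map of the same shape.
    """
    # Channel attention: global average pooling gives one gate per channel.
    channel_gate = sigmoid(feat.mean(axis=(1, 2)))        # shape (C,)
    feat = feat * channel_gate[:, None, None]
    # Spatial attention: mean and max over channels give one gate per pixel,
    # emphasizing locations with strong responses (e.g. garment boundaries).
    spatial_gate = sigmoid(feat.mean(axis=0) + feat.max(axis=0))  # (H, W)
    return feat * spatial_gate[None, :, :]

# Example: reweight a random 4-channel 8x8 feature map; shape is preserved.
out = mixed_attention(np.random.rand(4, 8, 8))
print(out.shape)
```

In a real network the gates would be learned (e.g. small convolutions or MLPs before the sigmoid) rather than parameter-free reductions as here; the sketch only shows how the two attention paths combine.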
