Abstract

With the rapid growth of the apparel e-commerce industry, more and more people are working in this field. In practice, clothing in images is easily affected by background clutter, viewpoint changes, occlusion, and other factors, so clothing segmentation is often not accurate enough. This paper proposes a clothing segmentation network built on texture analysis and semantic decoding modules. On top of a Swin Transformer backbone, the network introduces a clothing texture analysis module (CTAM) and a clothing semantic decoding module (CSDM). During encoding, CTAM extracts clothing texture features to enhance spatial information; during decoding, CSDM applies mixed attention to strengthen image context information and improve segmentation accuracy. Experimental results show that, compared with the original model, the proposed method improves mean pixel accuracy on the LIP dataset by about 2% and mean Intersection over Union (mIoU) by about 1.5%. The segmentation network markedly improves hard-to-distinguish garment boundaries and the overall accuracy of garment image segmentation.
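The abstract does not specify how CSDM's mixed attention is composed, but a common pattern is to combine a channel-attention branch with a spatial-attention branch. The sketch below is a minimal, hypothetical NumPy illustration of that idea (SE-style channel gating followed by a per-pixel spatial gate); the function names and the pooling/sigmoid choices are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Global-average-pool the spatial dims,
    # then gate each channel with a weight in (0, 1).
    pooled = feat.mean(axis=(1, 2))            # (C,)
    weights = sigmoid(pooled)                  # (C,)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    # Pool across channels to form a per-pixel saliency map,
    # then gate each spatial location.
    pooled = feat.mean(axis=0)                 # (H, W)
    weights = sigmoid(pooled)                  # (H, W)
    return feat * weights[None, :, :]

def mixed_attention(feat):
    # One plausible "mixed" composition: channel attention
    # followed by spatial attention (assumed ordering).
    return spatial_attention(channel_attention(feat))

# Toy feature map standing in for a decoder feature tensor.
feat = np.random.rand(8, 4, 4).astype(np.float32)
out = mixed_attention(feat)
print(out.shape)  # same shape as the input: (8, 4, 4)
```

In a real network the gating weights would come from learned layers (e.g. small convolutions or MLPs) rather than raw pooled activations, but the reweighting structure is the same.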

