Abstract

To improve the accuracy of semantic segmentation and the realism of the output images of traditional virtual dressing models, this study constructs an improved virtual dressing model based on a semantic segmentation network and a generative adversarial network. To further improve image resolution, the study also proposes a super-resolution model built on a second-order degradation model. Performance comparison experiments show that the semantic segmentation error of the proposed virtual dressing model is 9.1%, lower than that of the comparison models, and that the structural similarity of its output images is 0.87, higher than that of the comparison models; the model preserves the semantic details of both the human body and the clothing. An empirical analysis of the super-resolution model found that the frequency of overshoot artifacts in its output images was 12.6%, higher than that of the comparison models, while its image quality evaluation value was 3.11, lower than that of the comparison models. In summary, the results show that the semantic error performance and realism of the proposed improved virtual dressing model are better than those of the comparison models, and that the images generated by the super-resolution model contain less noise and blur, giving them better quality than the comparison models. This research provides a theoretical reference for promoting the application of virtual simulation technology in the apparel industry.
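The abstract does not describe the second-order degradation model in detail, but the general idea behind such models is to synthesize realistic low-resolution training inputs by applying a chain of degradations (blur, downsampling, noise) twice, so the compounded artifacts better match real-world image corruption. The sketch below is a minimal, hypothetical illustration of that idea using NumPy only; the kernel sizes, scale factors, and noise levels are assumptions, not values from the paper.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian blur kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur(img, kernel):
    """Naive 2-D convolution with edge padding (grayscale image in [0, 1])."""
    pad = kernel.shape[0] // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            out[i, j] = np.sum(window * kernel)
    return out

def degrade_once(img, rng, scale=2, noise_sigma=0.02):
    """One first-order degradation pass: blur -> downsample -> add noise."""
    img = blur(img, gaussian_kernel(5, 1.0))
    img = img[::scale, ::scale]                      # spatial downsampling
    img = img + rng.normal(0.0, noise_sigma, img.shape)  # simulated sensor noise
    return np.clip(img, 0.0, 1.0)

def second_order_degradation(hr, seed=0):
    """Apply the degradation chain twice, compounding the artifacts."""
    rng = np.random.default_rng(seed)
    lr = degrade_once(hr, rng)   # first-order pass
    lr = degrade_once(lr, rng)   # second pass gives the "second-order" model
    return lr

# Example: a 64x64 synthetic high-resolution patch shrinks to 16x16
# after two 2x degradation passes.
hr = np.random.default_rng(1).random((64, 64))
lr = second_order_degradation(hr)
```

In a training pipeline, pairs like `(lr, hr)` would then supervise the super-resolution network; the real model presumably uses richer degradations (e.g. varied kernels and compression artifacts) than this sketch.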

