Abstract

Visual compatibility and virtual feel are critical metrics for fashion analysis, yet they are missing from existing fashion designs and platforms. An explicit model is needed to achieve visual compatibility through fashion image inpainting and virtual try-on. Rapid advances in computer vision have raised expectations for customer experience, making this area of great potential interest to retailers and customers alike. The available public datasets are well suited to generating outfits with Generative Adversarial Networks (GANs), but outfits customized by the users themselves lead to low accuracy. This work is a first step toward analyzing and experimenting with the fit of custom outfits and visualizing them on the users, creating a better customer experience. It examines the need for visualizing custom outfits on users within the broader body of work on AI in fashion. The authors propose a novel architecture that combines outfits provided by retailers and visualizes them on the users themselves using Neural Body Fit (NBF). The work sets a benchmark in disentangling the custom generation of clothing outfits with GANs and virtually trying them on users to ensure a photorealistic appearance and a better AI-driven customer experience. Extensive experiments show high accuracy on GAN-generated outfits but not on customized ones. The experiments establish new state-of-the-art results by plotting the user's pose to calculate the length of each body-part segment (hand, leg, and so forth) and by combining segmentation with NBF for accurate fitting of the clothing outfit. The approach to virtual try-on, aimed at creating a new customer experience, distinguishes this paper from competing work.
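
As an illustration of the pose-based measurement step mentioned above, the following is a minimal sketch of how body-part segment lengths could be computed from 2D pose keypoints. The keypoint names and skeleton pairs follow the common COCO-style convention and are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch: estimating body-part segment lengths from 2D pose keypoints.
# Keypoint names and segment pairs follow a COCO-style convention (assumption).
import numpy as np

SEGMENTS = {
    "upper_arm_left":  ("left_shoulder", "left_elbow"),
    "forearm_left":    ("left_elbow", "left_wrist"),
    "upper_arm_right": ("right_shoulder", "right_elbow"),
    "forearm_right":   ("right_elbow", "right_wrist"),
    "thigh_left":      ("left_hip", "left_knee"),
    "shin_left":       ("left_knee", "left_ankle"),
    "thigh_right":     ("right_hip", "right_knee"),
    "shin_right":      ("right_knee", "right_ankle"),
    "torso":           ("left_shoulder", "left_hip"),
}

def segment_lengths(keypoints: dict) -> dict:
    """keypoints maps a joint name to an (x, y) pixel coordinate."""
    lengths = {}
    for name, (a, b) in SEGMENTS.items():
        pa, pb = np.asarray(keypoints[a]), np.asarray(keypoints[b])
        # Euclidean distance between the two joints, in pixels.
        lengths[name] = float(np.linalg.norm(pa - pb))
    return lengths
```

These per-segment lengths can then be used to scale and fit a garment to the user's body proportions before rendering.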

Highlights

  • Recent developments and breakthroughs in computer vision for fashion, such as Variational Autoencoders (VAEs) [1], Generative Adversarial Networks (GANs) [2], and their variants, have opened the path to a myriad of fashion-synthesis methods [3]–[5]

  • TECHNICAL APPROACH: This paper brings together two well-established areas, human pose estimation and fashion virtual try-on

  • Users are recommended to provide a perfect still posture as input. This work also experimented with worst-case scenarios such as stylish or imperfect input poses, which can be mitigated by using a Spatial Transformer Network (STN) and Spatial De-Transformer Network (SDTN), which select the region of interest automatically (a minimal STN sketch follows this list)
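
The sketch below shows a minimal Spatial Transformer Network block in PyTorch, assuming single-channel garment or person masks as input; the layer sizes are illustrative and not the paper's configuration. In a try-on pipeline the predicted transform would typically be used to warp the garment toward the person's pose; here, for brevity, the block warps its own input.

```python
# Minimal Spatial Transformer Network (STN) sketch in PyTorch.
# Layer sizes and input channels are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        # Localization network: predicts a 2x3 affine transform from the input.
        self.loc = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(True),
            nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(True),
            nn.AdaptiveAvgPool2d(3), nn.Flatten(),
            nn.Linear(10 * 3 * 3, 32), nn.ReLU(True),
            nn.Linear(32, 6),
        )
        # Initialize the last layer to the identity transform for stable training.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        # Generate a sampling grid from the predicted transform and resample.
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
```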


Summary

INTRODUCTION

Recent developments and breakthroughs in computer vision for fashion, such as Variational Autoencoders (VAEs) [1], Generative Adversarial Networks (GANs) [2], and their variants, have opened the path to a myriad of fashion-synthesis methods [3]–[5]. Shion Honda [14] proposed a two-stage architecture that generates new clothes on a person and transfers them to a different person. This form of clothing visualization generated considerable interest among users trying out arbitrary poses [31], [32] and GAN-generated outfits. The work in [14] covers the region of interest on the body with specified key points onto which the clothing outfits are appended [11]. This approach broadened the path for GANs [7] to validate their generated outfits on the body, providing the essence of virtual try-on [14], [31], [32], [36].
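
To make the keypoint-masking idea above concrete, the following is an illustrative sketch of how a coarse torso region-of-interest mask could be built from pose keypoints so that a generator can synthesize the garment inside it. The keypoint names and the convex-polygon masking are assumptions for illustration, not the exact procedure of [14].

```python
# Illustrative sketch: coarse torso region-of-interest mask from pose keypoints.
# Keypoint names and the shoulder-hip quadrilateral are assumptions.
import numpy as np
import cv2

def torso_roi_mask(keypoints: dict, height: int, width: int) -> np.ndarray:
    """Return a binary HxW mask covering the torso region."""
    pts = np.array([
        keypoints["left_shoulder"], keypoints["right_shoulder"],
        keypoints["right_hip"], keypoints["left_hip"],
    ], dtype=np.int32)
    mask = np.zeros((height, width), dtype=np.uint8)
    cv2.fillConvexPoly(mask, pts, 255)  # fill the shoulder-hip quadrilateral
    return mask

# A GAN-based try-on stage would then condition on the person image with this
# region masked out and synthesize the new garment inside the mask.
```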

DATASETS
VISUAL RECOMMENDATION AND QUERY
HUMAN SEGMENTATION
Findings
CONCLUSIONS AND FUTURE SCOPE