Abstract


 
 
 Virtual try-on technology is becoming popular and widely used in the digital transformation of online shopping, as it supports sustainability and enhances the customer experience. For practical applicability, the process must satisfy two main requirements: (1) accuracy and reliability, and (2) fast processing time. To meet these requirements, we propose a state-of-the-art technique for generating a visualization of the user wearing a model's costume using only a single user portrait and basic anthropometric measurements. We first summarize the main families of existing virtual try-on approaches: (1) interactive simulation with 3D models and (2) 2D photorealistic generation. Although these approaches can produce feasible visualizations, they suffer from efficiency and performance issues. Furthermore, their complex input requirements and demanding user interaction lead to difficulties in practical deployment and future scalability. In this regard, our study combines (1) a head-swapping technique that uses a face alignment model to detect, segment, and swap heads from only a pair of source and target images, (2) a photorealistic body-reshaping pipeline that directly resizes the user's visualization, and (3) an adaptive skin color model that adjusts the user's skin tone while preserving facial structure and a natural appearance. The proposed technique was evaluated quantitatively and qualitatively on three types of data: (1) VoxCeleb2, (2) datasets from the Viettel collection, and (3) user testing, demonstrating its feasibility and efficiency in real-world applications.
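
 The following minimal Python sketch illustrates only the dataflow of the three stages named above (head swap, body reshape, adaptive skin color); the helper logic (naive region paste, column resampling, channel-statistics color matching) and all function names and parameters are illustrative assumptions, not the authors' actual models or API.

 ```python
 # Illustrative sketch of the three-stage try-on dataflow (assumptions, not the paper's method).
 import numpy as np

 def swap_head(model_img: np.ndarray, user_img: np.ndarray,
               head_box: tuple[int, int, int, int]) -> np.ndarray:
     """Paste the user's head region into the model photo
     (naive stand-in for alignment-guided segmentation and blending)."""
     y0, y1, x0, x1 = head_box
     out = model_img.copy()
     out[y0:y1, x0:x1] = user_img[y0:y1, x0:x1]
     return out

 def reshape_body(img: np.ndarray, width_scale: float) -> np.ndarray:
     """Resample image columns to widen or narrow the figure
     (stand-in for a measurement-driven photorealistic reshape)."""
     h, w = img.shape[:2]
     new_w = max(1, int(round(w * width_scale)))
     cols = np.clip(np.linspace(0, w - 1, new_w).round().astype(int), 0, w - 1)
     return img[:, cols]

 def adapt_skin_tone(img: np.ndarray, reference: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
     """Match masked pixels' per-channel mean/std to the reference portrait
     (stand-in for an adaptive skin color model)."""
     out = img.astype(np.float32).copy()
     ref = reference.astype(np.float32)
     for c in range(3):
         src = out[..., c][mask]
         out[..., c][mask] = ((src - src.mean()) / (src.std() + 1e-6)
                              * ref[..., c].std() + ref[..., c].mean())
     return np.clip(out, 0, 255).astype(np.uint8)

 if __name__ == "__main__":
     # Random stand-ins for the model photo and the user's single portrait.
     model_photo = np.random.randint(0, 256, (256, 128, 3), dtype=np.uint8)
     user_portrait = np.random.randint(0, 256, (256, 128, 3), dtype=np.uint8)

     composite = swap_head(model_photo, user_portrait, head_box=(0, 60, 30, 100))
     composite = reshape_body(composite, width_scale=1.1)
     skin_mask = np.zeros(composite.shape[:2], dtype=bool)
     skin_mask[60:200, 30:100] = True  # hypothetical exposed-skin region
     result = adapt_skin_tone(composite, user_portrait, mask=skin_mask)
     print(result.shape)
 ```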
 
 
