Abstract

Clothing warping, which spatially aligns a source garment with the corresponding body parts, is crucial in clothing media tasks such as virtual try-on and pose-guided person generation. Recent pioneering work has used flow fields with additional degrees of freedom to model clothing deformation flexibly. However, current appearance-flow estimation methods typically rely only on a local cost volume that contains many noisy matching points, which can lead to mismatches between the clothing and the body parts. To address this issue, we propose Warping-Flow, a novel appearance-flow estimation network for clothing warping based on optimal linear assignment of features. Specifically, we make two key contributions to improve feature-matching precision. First, a local context feature aggregation module is proposed to enhance the semantic distinctiveness of the source-cloth and target-pose features. Second, Warping-Flow estimates a hard attention mask from the cost volume to filter out irrelevant features, and then applies an optimal linear assignment algorithm to normalize the cost volume into a discrete permutation matrix that explicitly models the most informative bipartite matches. Experiments conducted on the VITON and VITON-HD datasets demonstrate that Warping-Flow outperforms existing state-of-the-art algorithms, particularly in cases involving complex clothing deformation. Furthermore, Warping-Flow can serve as a plug-in to improve existing garment media technologies.
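The sketch below illustrates, at a high level, the matching pipeline the abstract describes: an all-pairs cost volume between cloth and pose features, a hard attention mask that keeps only the strongest candidate matches, and a Sinkhorn-style relaxation of the optimal linear assignment that pushes the masked cost volume toward a permutation matrix, from which a dense appearance flow can be read off. This is a minimal, hedged illustration, not the authors' implementation; the function names (`cost_volume`, `hard_attention_mask`, `sinkhorn_normalize`), the top-k mask size, and the use of a Sinkhorn relaxation are assumptions made for clarity.

```python
# Illustrative sketch only (not the paper's code), assuming PyTorch feature maps.
import torch


def cost_volume(cloth_feat: torch.Tensor, pose_feat: torch.Tensor) -> torch.Tensor:
    """All-pairs correlation between source-cloth and target-pose features.

    cloth_feat, pose_feat: (B, C, H, W) -> returns (B, H*W, H*W).
    """
    b, c, h, w = cloth_feat.shape
    f1 = cloth_feat.flatten(2).transpose(1, 2)            # (B, HW, C)
    f2 = pose_feat.flatten(2)                              # (B, C, HW)
    return torch.bmm(f1, f2) / c ** 0.5                    # (B, HW, HW)


def hard_attention_mask(cost: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Keep only the top-k candidate matches per source location (assumed k)."""
    kth = cost.topk(k, dim=-1).values[..., -1:]            # k-th largest score
    return (cost >= kth).float()


def sinkhorn_normalize(cost: torch.Tensor, mask: torch.Tensor,
                       iters: int = 20, tau: float = 0.05) -> torch.Tensor:
    """Relaxed optimal linear assignment: alternating row/column normalisation
    of the masked cost volume toward a (near-)doubly-stochastic permutation."""
    logits = cost / tau + (mask + 1e-9).log()              # suppress masked pairs
    log_p = logits.log_softmax(dim=-1)
    for _ in range(iters):
        log_p = log_p - log_p.logsumexp(dim=-2, keepdim=True)  # column normalisation
        log_p = log_p - log_p.logsumexp(dim=-1, keepdim=True)  # row normalisation
    return log_p.exp()


if __name__ == "__main__":
    # Toy usage: derive a dense flow as the expected target coordinate per source pixel.
    B, C, H, W = 1, 64, 32, 24
    cloth, pose = torch.randn(B, C, H, W), torch.randn(B, C, H, W)
    cv = cost_volume(cloth, pose)
    perm = sinkhorn_normalize(cv, hard_attention_mask(cv))      # (B, HW, HW)
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float),
                            torch.arange(W, dtype=torch.float), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).view(1, H * W, 2).expand(B, -1, -1)
    flow = torch.bmm(perm, coords).view(B, H, W, 2)             # per-pixel match location
    print(flow.shape)                                           # torch.Size([1, 32, 24, 2])
```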
