Abstract
Sketch-based understanding plays a role in human communication and cognitive development, making it essential to visual perception. A specific task in this domain is sketch-to-photo translation, where a model produces realistic images from simple drawings. This typically requires large paired training datasets, which are impractical to collect in real applications. This work therefore studies conditional generative models for sketch-to-photo translation, overcoming the lack of training data with a self-supervised approach that produces sketch-photo pairs from a target catalog. Our study shows the benefit of cycle-consistency losses and UNet architectures, which, together with the proposed dataset-generation method, improve performance in real applications such as eCommerce. Our results also reveal a weakness of conditional DDPMs: they struggle to generate images that resemble the input sketch, even when they achieve strong FID scores.