Abstract

As a significant form of art, fine art painting has become a research hotspot in the machine learning community. With their unique aesthetic value, paintings differ considerably from natural images in representation, which makes them irreplaceable. Meanwhile, a lack of training data is common in painting-related machine learning tasks. Synthesizing fine art paintings is therefore both meaningful and challenging. There are two main families of generative models for image synthesis: generative adversarial networks (GANs) and likelihood-based models. GAN-based models can produce high-quality samples but usually sacrifice diversity and training stability. Diffusion models, a class of likelihood-based models, have recently been shown to achieve state-of-the-art quality on image synthesis tasks. In this paper, we explore generating fine art paintings with diffusion models. We conducted experiments on a subset of Impressionist paintings from the WikiArt dataset. The results demonstrate that the diffusion model generates high-quality samples and is easier to train to cover more of the target distribution than GAN-based methods.
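The diffusion models mentioned above work by gradually corrupting an image with Gaussian noise and learning to reverse that process. As a minimal sketch, the closed-form forward (noising) step of a standard DDPM can be written as below; the linear beta schedule and the names `T`, `beta`, and `alpha_bar` follow common diffusion-model notation and are assumptions, not the paper's specific configuration.

```python
import numpy as np

# Linear noise schedule over T steps (a common default, assumed here).
T = 1000
beta = np.linspace(1e-4, 0.02, T)
alpha = 1.0 - beta
alpha_bar = np.cumprod(alpha)  # cumulative product \bar{alpha}_t

def q_sample(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = np.zeros((8, 8))        # toy stand-in for a painting
xT = q_sample(x0, T - 1)     # after T steps, nearly pure Gaussian noise
```

A trained model then learns the reverse transitions, denoising step by step from `xT` back toward the data distribution; training maximizes a variational bound on the likelihood, which is what places diffusion models in the likelihood-based family.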
