Abstract

In the ever-advancing field of image generation and inpainting, new techniques such as autoencoders, gated convolutions, partial convolutions, and even GANs keep being proposed. While these yield better results, the new architectures are typically more resource-intensive and cannot easily be implemented or built upon by aspiring researchers or integrated into contemporary workflows. In this paper, we propose a better methodology for training image inpainting models with a limited set of training data. We use the partial convolution-based, U-Net-like convolutional model proposed by G. Liu, F. A. Reda et al. [1]. It outperforms traditional autoencoders yet is not as resource-intensive as gated or attention-based convolution models, and, unlike GANs, it does not require multiple networks. We optimized the input pipeline and added support for resizing and augmenting the input dataset. We also integrated AugMix [2] to expand the dataset and better represent the extended input space. We present our approach and the model's performance after 25 epochs.
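To make the core building block from [1] concrete, the following is a minimal PyTorch sketch of a partial convolution layer. The class name PartialConv2d, the single-channel mask, and the layer hyperparameters are our own simplifications for illustration, not the authors' reference implementation: the convolution sees only valid (unmasked) pixels, the output is renormalized by the fraction of valid pixels under each kernel window, and the mask is updated so filled-in locations count as valid for downstream layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Sketch of a partial convolution layer (after Liu, Reda et al. [1]).

    Assumes a single-channel binary mask (1 = valid pixel, 0 = hole)
    broadcast across all input channels, for simplicity.
    """

    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              stride=stride, padding=padding, bias=True)
        # Fixed all-ones kernel that counts valid pixels in each window.
        self.register_buffer("ones",
                             torch.ones(1, 1, kernel_size, kernel_size))
        self.window_size = float(kernel_size * kernel_size)

    def forward(self, x, mask):
        # Count valid input pixels under each kernel window (no gradients).
        with torch.no_grad():
            valid = F.conv2d(mask, self.ones,
                             stride=self.conv.stride,
                             padding=self.conv.padding)
        out = self.conv(x * mask)            # convolve valid pixels only
        bias = self.conv.bias.view(1, -1, 1, 1)
        # Renormalize by sum(1)/sum(M); windows with no valid pixels -> 0.
        new_mask = (valid > 0).float()       # mask update rule from [1]
        scale = self.window_size / valid.clamp(min=1.0)
        out = ((out - bias) * scale + bias) * new_mask
        return out, new_mask
```

Stacking such layers in a U-Net, with the mask propagated alongside the features, follows the encoder design of [1]: the updated mask progressively fills in as receptive fields grow, so deep layers effectively see a hole-free feature map.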
