Abstract

Image inpainting techniques leveraging deep neural networks have recently been developed and deployed in many real-world applications. However, image inpainting networks, which are typically based on generative adversarial networks (GANs), suffer from high parameter complexity and long inference times. While there have been some efforts to compress image-to-image translation GANs, compressing image inpainting networks has rarely been explored. In this paper, we aim to build a small and efficient GAN-based inpainting model by compressing the generator of the inpainting model without sacrificing the quality of the reconstructed images. We propose novel channel pruning and knowledge distillation techniques specialized for image inpainting models that exploit mask information. Experimental results demonstrate that our compressed inpainting model, at only one-tenth of the original model size, achieves performance comparable to the full model.

Keywords: Image inpainting, Network pruning, Knowledge distillation
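The abstract does not specify how mask information enters the distillation objective, but one plausible reading is a distillation loss that up-weights the masked (hole) regions where the generator must hallucinate content. The sketch below is a minimal illustration of that idea under this assumption; the function name `mask_aware_distillation_loss` and the `hole_weight` hyperparameter are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def mask_aware_distillation_loss(student_feat, teacher_feat, mask, hole_weight=2.0):
    """Feature distillation that emphasizes hole regions (illustrative sketch).

    student_feat, teacher_feat: (N, C, H, W) intermediate feature maps.
    mask: (N, 1, h, w) binary inpainting mask, 1 = hole, 0 = valid pixels.
    hole_weight: assumed extra emphasis on hole regions (hypothetical).
    """
    # Resize the pixel-space mask down to the feature-map resolution.
    mask = F.interpolate(mask, size=student_feat.shape[-2:], mode="nearest")
    # Per-location weight: valid regions get 1.0, hole regions get hole_weight.
    weight = 1.0 + (hole_weight - 1.0) * mask
    # Weighted L1 distance between student and teacher features.
    return (weight * (student_feat - teacher_feat).abs()).mean()

# Toy usage with random tensors standing in for generator features.
student = torch.randn(2, 64, 32, 32)
teacher = torch.randn(2, 64, 32, 32)
mask = (torch.rand(2, 1, 128, 128) > 0.5).float()
print(mask_aware_distillation_loss(student, teacher, mask))
```

In an actual training loop, `teacher_feat` would come from the frozen full generator and `student_feat` from the pruned one, with this term added to the usual GAN and reconstruction losses.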
