Abstract

In computer vision, image translation has attracted increasing attention with the development of generative adversarial networks (GANs). However, deep GANs, like deep classification models, consume substantial computational resources. Directly applying existing model compression methods leads to poor performance, because image generation differs considerably from classification and the training process of GANs is unstable. Our work uses knowledge distillation to address this problem, shrinking the model and reducing inference time while maintaining performance. First, knowledge distillation transfers knowledge from the teacher network to the student network, which improves the student's performance. Second, we propose a small but effective strategy that stabilizes training. Finally, our method can be easily applied to other GANs such as GauGAN, because our knowledge distillation design only changes the number of channels rather than the model architecture. The results demonstrate the effectiveness of the proposed methods, which reduce model size without losing generation quality.
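
To make the distillation idea concrete, below is a minimal sketch of one training step for a channel-reduced student generator that mimics a frozen teacher while keeping an adversarial term. The function and variable names (teacher_G, student_G, D, lambda_distill, lambda_gan) and the specific loss terms are illustrative assumptions; the abstract does not specify the exact objectives used in the paper.

```python
import torch
import torch.nn.functional as F

# Hypothetical distillation step: the student generator is assumed to share
# the teacher's architecture but with fewer channels per layer.
def distillation_step(teacher_G, student_G, D, x, optimizer,
                      lambda_distill=10.0, lambda_gan=1.0):
    """One student update: match the frozen teacher's output and keep a GAN loss."""
    teacher_G.eval()
    with torch.no_grad():
        y_teacher = teacher_G(x)      # frozen teacher translation

    y_student = student_G(x)          # compressed student translation

    # Distillation term: student mimics the teacher's generated image.
    loss_distill = F.l1_loss(y_student, y_teacher)

    # Adversarial term: student output should still fool the discriminator.
    loss_gan = -D(y_student).mean()

    loss = lambda_gan * loss_gan + lambda_distill * loss_distill
    optimizer.zero_grad()
    loss.backward()                   # gradients flow only into the student
    optimizer.step()
    return loss.item()
```

Because only the channel widths differ between teacher and student, the same step can in principle be reused for other translation GANs (e.g. GauGAN) without modifying the model definition itself.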
