Abstract

Deep learning models have a wide range of applications, including the generation of realistic-looking images. These models typically require large amounts of training data, and we set out to explore how much image quality is sacrificed when less data is available. We built several generative models, trained them at different dataset sizes, and assessed the quality of the generated images with the widely used Fréchet Inception Distance (FID). As expected, we measured a correlation of -0.7 between FID and training set size: since lower FID indicates higher quality, image quality improved as the training set grew. However, we observed that the small-training-set results had problems not detectable by this experiment. We therefore present an experimental design for a follow-up study that would further probe the lower limits of training set size. Such experiments bring us closer to understanding how much data is needed to train a successful generative model.
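For reference, FID compares Gaussian fits (mean and covariance) of Inception-network activations for real versus generated images: FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2(Sigma_r Sigma_g)^(1/2)). The Python sketch below shows how such scores might be computed and then correlated with training set size, as the abstract describes; the activation matrices and the (size, FID) pairs are illustrative stand-ins, not the paper's models or measurements.

    import numpy as np
    from scipy import linalg, stats

    def fid_from_activations(act_real, act_gen):
        # Fit a Gaussian (mean, covariance) to each activation matrix
        # (rows = images, columns = features) and compute the Frechet distance.
        mu_r, mu_g = act_real.mean(axis=0), act_gen.mean(axis=0)
        cov_r = np.cov(act_real, rowvar=False)
        cov_g = np.cov(act_gen, rowvar=False)
        covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
        if np.iscomplexobj(covmean):  # drop tiny imaginary parts from sqrtm
            covmean = covmean.real
        diff = mu_r - mu_g
        return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

    # Toy demo: random matrices stand in for real Inception features.
    rng = np.random.default_rng(0)
    real_acts = rng.normal(size=(512, 64))
    gen_acts = rng.normal(loc=0.3, size=(512, 64))
    print("FID:", fid_from_activations(real_acts, gen_acts))

    # Correlating FID with training set size (placeholder values, not the
    # paper's measurements): Pearson's r quantifies the relationship.
    sizes = np.array([1_000, 5_000, 10_000, 50_000])
    fids = np.array([80.0, 45.0, 30.0, 18.0])  # hypothetical FID per size
    r, p = stats.pearsonr(sizes, fids)
    print(f"Pearson r = {r:.2f} (lower FID with more data gives negative r)")

A negative Pearson r between FID and training set size, as in the abstract's reported -0.7, corresponds to quality improving as data increases.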
