Abstract

Generative modeling is a type of machine learning used to model the distribution from which a given set of data (e.g. images, audio) came. Normally, this is an unsupervised problem, in the sense that the models are trained on a large collection of unlabeled data. When trained successfully, a generative model can be used to estimate the likelihood of each observation and to create new samples from the underlying distribution. Various deep generative models are in widespread use today, such as Generative Adversarial Networks (GANs), variational autoencoders, flow-based models, and diffusion models. A GAN addresses the unsupervised learning task of automatically discovering the regularities or patterns in input data so that the model can generate new examples that plausibly could have been drawn from the original dataset. GANs are a clever way of training a generative model by framing the problem as a supervised learning problem with two submodels: the generator model, which is trained to generate new examples, and the discriminator model, which tries to classify examples as either real (from the domain) or fake (generated). The two models are trained together in an adversarial, zero-sum game until the discriminator model is fooled about half the time, meaning the generator model is producing plausible examples. In this chapter, we introduce the structure and the formulation of GANs. We also present several case studies to illustrate typical application scenarios of GANs.
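The adversarial training loop summarized above can be illustrated with a minimal sketch. The code below is an illustrative example only, not the chapter's implementation; it assumes PyTorch is available, and the toy data distribution (a 1-D Gaussian), network sizes, and hyperparameters are arbitrary choices made for the sketch.

    # Minimal GAN training sketch (illustrative; assumes PyTorch).
    # The generator learns to mimic samples from a 1-D Gaussian, while the
    # discriminator learns to distinguish real samples from generated ones.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    latent_dim = 8

    generator = nn.Sequential(
        nn.Linear(latent_dim, 32), nn.ReLU(),
        nn.Linear(32, 1),                 # outputs a fake "data point"
    )
    discriminator = nn.Sequential(
        nn.Linear(1, 32), nn.ReLU(),
        nn.Linear(32, 1), nn.Sigmoid(),   # probability that the input is real
    )

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = 4.0 + 1.25 * torch.randn(64, 1)       # real samples from N(4, 1.25^2)
        fake = generator(torch.randn(64, latent_dim))

        # Discriminator update: label real examples 1 and generated examples 0.
        d_opt.zero_grad()
        d_loss = (bce(discriminator(real), torch.ones(64, 1))
                  + bce(discriminator(fake.detach()), torch.zeros(64, 1)))
        d_loss.backward()
        d_opt.step()

        # Generator update: try to make the discriminator label fakes as real.
        g_opt.zero_grad()
        g_loss = bce(discriminator(fake), torch.ones(64, 1))
        g_loss.backward()
        g_opt.step()

At convergence the discriminator's output approaches 0.5 on both real and generated samples, which corresponds to the "fooled about half the time" condition described above.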
