Abstract

Neural networks were invented to classify signals. However, networks can also be used to generate signals. This chapter introduces generative networks and shows that discriminative networks can be combined with generative networks to produce an autoencoder. Autoencoders can be trained with self-supervised learning to provide a compact code for signals. This code can be used to reconstruct clean copies of noisy signals. With a simple, information-theoretic modification to the loss function, an autoencoder can be used for unsupervised discovery of categories in data, providing the basis for the self-supervised learning at the heart of Transformers. To explain this modification, this chapter reviews basic concepts from information theory, including entropy, cross-entropy, and the Kullback-Leibler (KL) divergence. The chapter concludes with brief presentations of Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs).

Learning Objectives: This chapter provides students with an introduction to generative neural networks and autoencoders, covering fundamental concepts from information theory as well as applications such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). Mastering the material in this chapter will enable students to understand how a neural network can be trained to generate signals, and how an autoencoder can be used for unsupervised and self-supervised learning. Students will acquire an understanding of fundamental concepts from information theory such as entropy and sparsity. Students will be able to explain how generative networks can be combined with discriminative networks to construct generative adversarial networks.
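The central construction described above, a denoising autoencoder trained with self-supervised learning, can be illustrated in a few lines of NumPy. The sketch below is not code from the chapter: the layer sizes, learning rate, noise level, and synthetic prototype signals are all assumptions chosen for brevity. The encoder (the discriminative half) maps a noisy input to a compact code, and the decoder (the generative half) reconstructs the clean signal from that code.

    # Minimal denoising autoencoder sketch (illustrative only; all sizes
    # and hyperparameters are assumptions, not values from the chapter).
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    n_in, n_code = 16, 4                                       # signal length, code size
    prototypes = rng.integers(0, 2, (4, n_in)).astype(float)   # "clean" signals

    W_enc = rng.normal(0.0, 0.1, (n_code, n_in))   # encoder: discriminative half
    W_dec = rng.normal(0.0, 0.1, (n_in, n_code))   # decoder: generative half

    lr = 0.5
    for step in range(5000):
        x = prototypes[rng.integers(0, len(prototypes))]   # pick a clean signal
        x_noisy = x + rng.normal(0.0, 0.2, n_in)           # corrupt it with noise

        h = sigmoid(W_enc @ x_noisy)   # compact code
        x_hat = sigmoid(W_dec @ h)     # reconstructed signal

        # Self-supervised loss: mean squared error against the CLEAN signal,
        # so the network learns to denoise without any human-provided labels.
        delta_dec = (x_hat - x) * x_hat * (1.0 - x_hat)
        delta_enc = (W_dec.T @ delta_dec) * h * (1.0 - h)
        W_dec -= lr * np.outer(delta_dec, h)
        W_enc -= lr * np.outer(delta_enc, x_noisy)

    # After training, the autoencoder reconstructs a clean copy from a noisy input.
    x = prototypes[0]
    x_hat = sigmoid(W_dec @ sigmoid(W_enc @ (x + rng.normal(0.0, 0.2, n_in))))
    print("reconstruction error:", np.mean((x_hat - x) ** 2))

Note that the training target is the clean signal rather than a human-provided label; this is what makes the training self-supervised rather than supervised.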
