Abstract

Variational autoencoders are an important tool for generative data modeling, yet they remain difficult to design: there is little definitive guidance on how to structure the encoder and decoder neural networks or how to choose the size of the latent dimension. Designing an effective variational autoencoder therefore typically requires fine-tuning and experimenting with many architectures and latent dimensions, which, for large datasets, can be costly in both time and money. In this work we present an approach to designing variational autoencoders based on evolutionary neural architecture search. Our technique is efficient, avoiding redundant computation, and scalable. We explore how the number of epochs used to evaluate candidates during the neural architecture search affects the properties of the resulting variational autoencoders, and we study the characteristics of the learned latent manifolds. We find that evolutionary search discovers highly performant networks even when candidates are evaluated after only two epochs of training. Using this insight, we dramatically reduce the overall computational requirements of our neural architecture search system as applied to variational autoencoders.
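The core idea described above — an evolutionary loop in which each candidate architecture is scored after only a short training run — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the genome encoding (latent dimension and hidden-layer width), the operators, and in particular the `short_train_fitness` surrogate are all hypothetical stand-ins; in a real system that function would train the candidate VAE for a couple of epochs and return its validation ELBO.

```python
import random

# Hypothetical genome: (latent_dim, hidden_width).
def short_train_fitness(genome, epochs=2):
    """Stand-in for 'train the candidate VAE for `epochs` epochs and
    return its validation score'. The surrogate below merely keeps the
    sketch self-contained; it arbitrarily favors moderate latent dims
    and wider hidden layers."""
    latent_dim, hidden = genome
    return -abs(latent_dim - 16) + hidden / 64.0

def mutate(genome, rng):
    """Perturb one gene: nudge the latent dimension or the hidden width."""
    latent_dim, hidden = genome
    if rng.random() < 0.5:
        latent_dim = max(2, latent_dim + rng.choice([-2, 2]))
    else:
        hidden = max(16, hidden + rng.choice([-16, 16]))
    return (latent_dim, hidden)

def evolve(pop_size=8, generations=10, seed=0):
    """Elitist evolutionary search: keep the top half of each generation
    and refill the population with mutated copies of the survivors."""
    rng = random.Random(seed)
    population = [(rng.randrange(2, 33), rng.randrange(16, 129))
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=short_train_fitness, reverse=True)
        parents = scored[: pop_size // 2]  # truncation selection
        population = parents + [mutate(rng.choice(parents), rng)
                                for _ in range(pop_size - len(parents))]
    return max(population, key=short_train_fitness)
```

Because every fitness evaluation costs only a short training run, the expensive full training budget is spent once, on the final architecture, rather than on every candidate the search visits.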
