Abstract
In unsupervised learning, extracting a useful representation space is an open challenge in machine learning. Two important contributions in this field are the Variational Auto-Encoder (VAE), which learns a continuous latent representation, and the Vector Quantized VAE (VQ-VAE), which learns a discrete one. VQ-VAE is a discrete latent variable model that has been shown to learn nontrivial feature representations of images in an unsupervised setting, and it is a viable alternative to continuous latent variable models such as the VAE. However, training deep discrete variable models is challenging due to the inherent non-differentiability of the discretization operation. In this paper, we propose the Capsule Vector VAE (CV-VAE), a new model based on the VQ-VAE architecture in which the discrete bottleneck, represented by the quantization codebook, is replaced with a capsule layer. We demonstrate that capsules can be successfully applied to the clustering procedure, restoring the differentiability of the bottleneck: the capsule layer clusters the encoder outputs according to the agreement among capsules. The CV-VAE is trained within the Generative Adversarial paradigm (GAN), CVGAN for short. Our model is shown to perform on par with the original VQGAN (a VAE within a GAN), and CVGAN obtains higher-quality images after only a few epochs of training. We present results on the ImageNet, COCO-Stuff, and FFHQ datasets, and we compare the generated images with those of VQGAN. The interpretability of the training process for the latent representation is significantly increased while the structured-bottleneck idea is maintained. This has practical benefits, for instance, in unsupervised representation learning, where a large number of capsules may lead to the disentanglement of latent representations.

Keywords: VAE, Capsules, VQ-VAE, VQGAN, GAN, Computer vision
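To illustrate the differentiability issue the abstract refers to, the sketch below contrasts the hard VQ-VAE bottleneck (a nearest-codebook lookup through a non-differentiable argmin, which in practice requires a straight-through gradient estimator) with a soft, agreement-weighted assignment. The soft version is only an illustrative stand-in for the capsule layer's clustering-by-agreement, not the paper's actual routing; all names and sizes here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))          # encoder outputs: 4 vectors of dim 8
codebook = rng.normal(size=(16, 8))  # 16 code vectors (illustrative sizes)

# Squared distances between each encoder output and each code vector.
d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # shape (4, 16)

# Hard VQ bottleneck: the argmin is non-differentiable, so VQ-VAE
# has to bypass it with a straight-through estimator during training.
hard = codebook[d.argmin(axis=1)]                          # shape (4, 8)

# Soft, agreement-weighted assignment: every operation is smooth,
# so gradients flow through the bottleneck without any estimator.
w = np.exp(-(d - d.min(axis=1, keepdims=True)))  # shifted for stability
w /= w.sum(axis=1, keepdims=True)                # rows sum to 1
soft = w @ codebook                              # differentiable output
```

The key design point is that replacing the argmin lookup with a smooth weighting makes the whole encoder-to-decoder path differentiable end to end.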