Abstract

A noticeable trend in machine learning is handling data of multiple modalities. Beyond the multimodal motivation, learning additional general information without forgetting prior knowledge, i.e., incremental learning, could also benefit unimodal machine learning. Human understanding often starts from a simplistic, generic view of the whole problem and then fills in details and nuances. Incremental learning could imitate such a learning process to learn more and potentially faster. This paper examines the possibility of learning additional latent variables from known latent variables in variational autoencoders. A method is proposed to facilitate learning additional information, based on a modified β-TCVAE loss function that incorporates known general mutual information. A qualitative comparison on the dSprites dataset evaluates how this modification and changes to the network structure affect the learned latent variables, hinting at a structural tendency for β-TCVAE to learn new information from explicitly known latent variables.
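The abstract does not spell out the modified loss. As context, the standard β-TCVAE objective decomposes the KL term into index-code mutual information, total correlation, and dimension-wise KL, each with its own weight. The sketch below estimates these three terms with a simple minibatch estimator (ignoring the importance-weighting correction for the i=j term) and assembles a loss in which the MI penalty is offset by a known-MI constant; the `known_mi` offset and the `modified_loss` helper are hypothetical illustrations of "incorporating known general mutual information", not the paper's actual formulation.

```python
import numpy as np

def log_gauss(z, mu, logvar):
    """Elementwise log-density of a diagonal Gaussian N(mu, exp(logvar))."""
    return -0.5 * (np.log(2 * np.pi) + logvar + (z - mu) ** 2 / np.exp(logvar))

def logsumexp(a, axis):
    """Numerically stable log-sum-exp along one axis."""
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def btcvae_terms(z, mu, logvar):
    """Minibatch estimates of the beta-TCVAE KL decomposition:
    KL(q(z|x) || p(z)) ~ index-code MI + total correlation + dimension-wise KL.
    z, mu, logvar: (N, D) arrays; row i of z is sampled from q(z | x_i)."""
    N, D = z.shape
    # log q(z_i | x_i), summed over latent dimensions
    log_qzx = log_gauss(z, mu, logvar).sum(axis=1)                      # (N,)
    # pairwise log q(z_i | x_j) for the minibatch estimator of q(z)
    mat = log_gauss(z[:, None, :], mu[None, :, :], logvar[None, :, :])  # (N, N, D)
    log_qz = logsumexp(mat.sum(axis=2), axis=1) - np.log(N)             # (N,)
    log_prod_qzj = (logsumexp(mat, axis=1) - np.log(N)).sum(axis=1)     # (N,)
    log_pz = log_gauss(z, 0.0, 0.0).sum(axis=1)                         # (N,)
    mi = (log_qzx - log_qz).mean()          # index-code mutual information
    tc = (log_qz - log_prod_qzj).mean()     # total correlation
    dwkl = (log_prod_qzj - log_pz).mean()   # dimension-wise KL
    return mi, tc, dwkl

def modified_loss(recon_nll, mi, tc, dwkl, known_mi=0.0,
                  alpha=1.0, beta=6.0, gamma=1.0):
    """Hypothetical variant: offset the MI penalty by mutual information
    already accounted for by known latent variables (known_mi)."""
    return recon_nll + alpha * (mi - known_mi) + beta * tc + gamma * dwkl
```

By construction the three terms telescope, so their sum equals the minibatch estimate of the full KL term, which makes the decomposition easy to sanity-check.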
