Abstract

Learning latent representations of observed data that favour both discriminative and generative tasks remains challenging in artificial-intelligence (AI) research. Previous attempts, ranging from the convex binding of discriminative and generative models to the semisupervised learning paradigm, have rarely yielded optimal performance on both generative and discriminative tasks. To this end, we harness two neuroscience-inspired learning constraints, dependence minimisation and regularisation, to improve the generative and discriminative modelling performance of a deep generative model. To demonstrate the use of these learning constraints, we introduce a novel deep generative model, the encapsulated variational autoencoder (EVAE), which stacks two different variational autoencoders together with their learning algorithm. Using the MNIST digits dataset as a demonstration, the generative modelling performance of the EVAE improved under the imposed dependence-minimisation constraint, encouraging the derived deep generative model to produce varied patterns of MNIST-like digits. Using CIFAR-10(4K) as an example, a semisupervised EVAE with the imposed regularisation learning constraint achieved competitive discriminative performance on the classification benchmark, even against state-of-the-art semisupervised learning approaches.

Highlights

  • To implement learning constraints on an encapsulated variational autoencoder (EVAE), we introduce a key tunable hyperparameter α that governs the relation between the two component variational autoencoders (VAEs) within an EVAE

  • The dependence-minimisation constraint, which encourages the derived generative model to encode varied latent representations, can be implemented by setting α to a large value, whereas a small α can be viewed as implementing the regularisation learning constraint, elevating the discriminative modelling performance of a semisupervised EVAE (see the illustrative sketch after this list)

  • The regularisation constraint aims at learning stable, interpretable representations by harnessing an external supervision signal to resolve ambiguous visual input and acquire clear-cut visual concepts for discriminative tasks [11,26]. To see how this learning constraint improves the discriminative modelling performance of the derived EVAE, we start by specifying a semisupervised EVAE
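
The exact parameterisation of the EVAE is given in the paper; as a purely illustrative sketch, the PyTorch snippet below shows one way a tunable coefficient α could weight the objective of an inner (encapsulated) VAE that models the latent code of an outer VAE. All class names, layer sizes, and the form of the coupling term are assumptions made for exposition, not the authors' implementation.

```python
# Illustrative sketch only: a hypothetical "encapsulated" pair of VAEs, where a
# coefficient alpha weights the inner (coupling) objective. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallVAE(nn.Module):
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * latent_dim)   # outputs mean and log-variance
        self.dec = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.dec(z), mu, logvar, z

def elbo_terms(x, recon, mu, logvar):
    # Negative ELBO pieces: reconstruction error and KL to a standard-normal prior.
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss, kl

class EncapsulatedVAE(nn.Module):
    """Hypothetical stacking: the outer VAE models x, the inner VAE models the
    outer latent code; alpha scales the inner objective."""
    def __init__(self, in_dim=784, outer_latent=32, inner_latent=8, alpha=1.0):
        super().__init__()
        self.outer = SmallVAE(in_dim, outer_latent)
        self.inner = SmallVAE(outer_latent, inner_latent)
        self.alpha = alpha

    def loss(self, x):
        recon_x, mu_o, logvar_o, z_o = self.outer(x)
        recon_z, mu_i, logvar_i, _ = self.inner(z_o)
        rec_o, kl_o = elbo_terms(x, recon_x, mu_o, logvar_o)
        rec_i, kl_i = elbo_terms(z_o, recon_z, mu_i, logvar_i)
        # A large alpha emphasises the inner term (dependence-minimisation reading);
        # a small alpha downweights it (regularisation reading) -- interpretation only.
        return (rec_o + kl_o) + self.alpha * (rec_i + kl_i)

# Usage example (hypothetical):
# model = EncapsulatedVAE(alpha=5.0)          # larger alpha -> stronger inner term
# total_loss = model.loss(torch.randn(16, 784))
```

Which term α actually scales in the published EVAE, and how the semisupervised variant incorporates label information, is specified in the paper's parameterisation and learning-objective sections.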

Summary

Deep Generative Models

Representation learning, a learning process that aims to extract representative latent representations of observed data, is one of the most active research areas in artificial intelligence [1,2]. Utilising deep neural networks for parameter estimation, deep generative models grant further flexibility in learning generative representations, which fuels the optimisation of data-reconstruction fidelity [3]. Refinements to the learning objective, such as those based on mutual information [7], allow a deep generative model to learn disentangled latent representations of data and grant further flexibility in data generation. While these learned latent representations have fuelled many exciting downstream applications, deep generative models generally perform poorly on discriminative tasks, even with sufficient added supervision signals [8,9]. A simple remedy is to train deep generative models with semisupervised training datasets, which has led to the coinage of deep conditional generative models. While some generative approaches have previously been investigated, a more efficient approach that improves both the generative- and discriminative-modelling performance of a deep generative model is still lacking
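
For background, the objective a single standard VAE optimises (independent of this paper's EVAE formulation) is the evidence lower bound, which trades data-reconstruction fidelity against a KL regulariser on the latent code:

```latex
\mathcal{L}_{\mathrm{VAE}}(\theta,\phi;x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big]
  - D_{\mathrm{KL}}\!\big(q_\phi(z \mid x)\,\Vert\,p(z)\big)
```

The first term drives reconstruction fidelity; the second keeps the approximate posterior close to the prior, shaping the learned latent representation.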

Approach Overview
Related Works
Brief VAE Review
Parameterisations on EVAE
Joint Encoder and Decoder
Learning Objective
Learning Constraints
Dependence-Minimisation Constraint
Regularisation Constraint
EVAE Parameter Estimation
Empirical Experiments
Experiment Setup
Generative Task
Discriminative Task
Findings
Conclusions