Abstract

Representation learning plays an important role in building effective deep neural network models. Deep generative probabilistic models have been shown to be effective for data representation learning, which is usually carried out in an unsupervised fashion. Throughout the past decade, the focus has been almost exclusively on learning algorithms to improve the representation capability of generative models. However, effective data representation requires improvements in both the learning algorithm and the architecture of the generative model, so advances in neural architecture are critical for improving the data representation capability of deep generative models. Furthermore, the prevailing deep generative models, such as the deep belief network (DBN), the deep Boltzmann machine (DBM), and the deep sigmoid belief network (DSBN), are inherently unidirectional and lack the recurrent connections that are ubiquitous in biological neuronal structures. Introducing recurrent connections may therefore further improve the representation learning performance of deep generative models. Consequently, for the first time in the literature, this work proposes a deep recurrent generative model, the deep simultaneous recurrent belief network (D-SRBN), to efficiently learn representations from unlabeled data. Experiments on four benchmark datasets (MNIST, Caltech 101 Silhouettes, OCR letters, and Omniglot) show that the proposed D-SRBN achieves superior representation learning performance while using less computing resources than four state-of-the-art generative models: the DBN, the DBM, the DSBN, and the variational autoencoder (VAE).
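
The abstract does not describe the internal architecture of the D-SRBN, so the following is only a minimal, hypothetical sketch of the general idea it alludes to: a sigmoid-belief-network-style generative layer augmented with simultaneous (within-layer) recurrent connections that let the generated activations relax over a few iterations. All names and parameters (W_top_down, W_recurrent, n_recurrent_steps) are illustrative assumptions, not the authors' method.

# Illustrative sketch only: the D-SRBN architecture is not specified in the abstract.
# This shows the generic idea of adding simultaneous (within-layer) recurrent
# connections to a sigmoid-belief-network-style generative layer.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

n_hidden, n_visible = 64, 784                               # e.g. MNIST-sized visible layer
W_top_down = rng.normal(0, 0.01, (n_visible, n_hidden))     # top-down generative weights
W_recurrent = rng.normal(0, 0.01, (n_visible, n_visible))   # hypothetical lateral/recurrent weights
b_visible = np.zeros(n_visible)

def generate(h, n_recurrent_steps=5):
    """Top-down generative pass followed by a few recurrent relaxation steps.

    A plain sigmoid belief network layer would stop after the first line;
    the loop sketches how simultaneous recurrent connections could let the
    visible activations settle toward a mutually consistent configuration.
    """
    v = sigmoid(W_top_down @ h + b_visible)
    for _ in range(n_recurrent_steps):
        v = sigmoid(W_top_down @ h + W_recurrent @ v + b_visible)
    return v

h_sample = rng.binomial(1, 0.5, n_hidden).astype(float)     # random top-level binary code
v_probs = generate(h_sample)
print(v_probs.shape)                                        # (784,)

In this sketch, dropping the relaxation loop recovers an ordinary unidirectional belief-network layer; the loop is the part contributed by the recurrent connections.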
