Abstract

Efficient learning of the data distribution is necessary for many applications such as classification, recognition, decision making, and segmentation. Generative probabilistic models have been used extensively to learn the distribution of input data effectively. Moreover, generative models are considered the first and foremost building blocks for developing highly expressive deep neural network architectures. Over the past decade, researchers have developed several undirected and directed probabilistic generative models. The two most popular are the Restricted Boltzmann Machine (RBM) and the Sigmoid Belief Network (SBN). Both models use a two-layer architecture with feedforward information processing, inspired by biological systems, to learn data distributions. However, neither model exhibits another well-known property found in biology, namely recurrent neuronal information processing, which may be beneficial for learning more complex data distributions. Consequently, this paper proposes, for the first time in the literature, a directed recurrent generative model, the Simultaneous Recurrent Belief Network (SRBN), for efficiently learning the distribution of the input data. The efficacy of the proposed SRBN model is evaluated on two benchmark datasets: MNIST and Caltech 101 Silhouettes. Our experimental results suggest that the SRBN model improves data-distribution learning performance while using minimal trainable parameters.
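To make the two-layer generative setup the abstract refers to concrete, the following is a minimal, illustrative sketch of a generic binary RBM trained with one step of contrastive divergence (CD-1). This is a standard textbook baseline, not the paper's SRBN model; the layer sizes, learning rate, and step count are arbitrary assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes for illustration only.
n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))  # visible-hidden weights
b = np.zeros(n_visible)                             # visible biases
c = np.zeros(n_hidden)                              # hidden biases

def cd1_step(v0, lr=0.1):
    """One CD-1 update on a single binary visible vector v0."""
    global W, b, c
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(n_hidden) < ph0).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(n_visible) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Gradient approximation: <v h>_data minus <v h>_model.
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)
    return float(np.mean((v0 - pv1) ** 2))  # reconstruction error

# Repeatedly fitting one toy pattern drives reconstruction error down.
v = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])
errors = [cd1_step(v) for _ in range(200)]
```

The SBN differs from this sketch mainly in being a directed model trained with a different (variational) procedure, and the paper's SRBN additionally introduces recurrent processing; neither variation is shown here.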
