Abstract

As one of the essential deep learning models, the restricted Boltzmann machine (RBM) is a widely used generative model. By adaptively growing the number of hidden units, the infinite RBM (IRBM) is obtained, which automatically chooses its hidden layer size for a given task while remaining competitive with the traditional RBM in generative capability. In this work, a generative model called the Gaussian IRBM (GIRBM) is first proposed to handle real-valued data in practical scenarios without discretization. Subsequently, a discriminative IRBM (DIRBM) and a discriminative GIRBM (DGIRBM) are established to solve classification problems by attaching extra label units alongside the input layer; they are motivated by the fact that a discriminative variant of an RBM forms a self-contained classification framework that can outperform some standard classifiers. Remarkably, the proposed models retain generative and discriminative properties simultaneously: they can reconstruct data effectively and serve as self-contained classifiers. Experimental results on image classification (both large and small), text identification, and facial recognition (both clean and noisy) show that the DIRBM and the DGIRBM outperform several state-of-the-art RBM models in terms of reconstruction error and classification accuracy. Moreover, they avoid using more hidden units than necessary across datasets of different sizes, favoring smaller networks, and they are more robust than other classic classifiers on noisy facial recognition.
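To make the "extra label units" idea concrete, the following is a minimal sketch (not the paper's DIRBM, which additionally grows its hidden layer adaptively) of a discriminative RBM in NumPy: hidden units connect to both the input x and a one-hot label y, training uses one step of contrastive divergence (CD-1), and classification picks the label with the lowest free energy. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

class DiscriminativeRBM:
    """Illustrative discriminative RBM: joint model of (x, y) with a
    fixed hidden layer size (unlike the adaptively growing DIRBM)."""

    def __init__(self, n_vis, n_cls, n_hid):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))  # input-hidden weights
        self.U = rng.normal(0.0, 0.01, (n_cls, n_hid))  # label-hidden weights
        self.b = np.zeros(n_vis)   # visible bias
        self.c = np.zeros(n_hid)   # hidden bias
        self.d = np.zeros(n_cls)   # label bias
        self.n_cls = n_cls

    def cd1_step(self, x, y, lr=0.1):
        # Positive phase: hidden activations driven by data and labels.
        ph = sigmoid(x @ self.W + y @ self.U + self.c)
        h = (rng.random(ph.shape) < ph).astype(float)
        # Negative phase: reconstruct inputs and labels, then hidden again.
        xr = sigmoid(h @ self.W.T + self.b)
        ey = h @ self.U.T + self.d
        yr = np.exp(ey - ey.max(axis=1, keepdims=True))
        yr /= yr.sum(axis=1, keepdims=True)            # softmax over labels
        nh = sigmoid(xr @ self.W + yr @ self.U + self.c)
        n = x.shape[0]
        self.W += lr * (x.T @ ph - xr.T @ nh) / n
        self.U += lr * (y.T @ ph - yr.T @ nh) / n
        self.b += lr * (x - xr).mean(axis=0)
        self.d += lr * (y - yr).mean(axis=0)
        self.c += lr * (ph - nh).mean(axis=0)

    def predict(self, x):
        # Score each class by negative free energy; the b'x term is
        # identical across classes and can be dropped.
        scores = []
        for k in range(self.n_cls):
            y = np.zeros(self.n_cls)
            y[k] = 1.0
            act = x @ self.W + y @ self.U + self.c
            scores.append(self.d[k] + np.logaddexp(0.0, act).sum(axis=1))
        return np.argmax(np.stack(scores, axis=1), axis=1)
```

The classifier is self-contained in the sense the abstract describes: the same energy function used for reconstruction also yields class scores, so no separate classifier is bolted on.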
