Abstract

The discriminative restricted Boltzmann machine (DRBM) is a probabilistic three-layer neural network, consisting of input, hidden, and output layers, used to solve classification problems. This study attempts to improve the generalization property of the DRBM. Regularization methods such as L1 or L2 regularization can be used to control the representation power of a learning model and to suppress over-fitting to a dataset. To control the representation power of the DRBM, an alternative regularization approach is proposed, in which sparse regularization is imposed on the values of the hidden variables of the DRBM. In the resulting model, the sparse regularization controls the effective size of the hidden layer. Unlike standard regularization methods, the parameters that control the sparsity strength in the proposed model are trainable. The method is validated through numerical experiments on benchmark datasets.
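
To make the idea concrete, the sketch below is a minimal illustration, not the authors' implementation: it builds a standard DRBM classifier (in the style of Larochelle and Bengio's discriminative RBM, where the binary hidden layer is marginalized out in closed form) and adds a simple L1-type penalty on the expected hidden activations. The class name SketchDRBM, the helper sparse_drbm_loss, and the fixed coefficient sparsity are illustrative assumptions; in particular, the sparsity strength is a fixed hyperparameter here, whereas in the proposed model the parameters controlling it are trainable.

```python
import torch
import torch.nn.functional as F


class SketchDRBM(torch.nn.Module):
    """Minimal discriminative RBM classifier: p(y|x) is obtained by
    marginalizing the binary hidden layer out in closed form."""

    def __init__(self, n_visible, n_hidden, n_classes):
        super().__init__()
        self.W = torch.nn.Parameter(0.01 * torch.randn(n_hidden, n_visible))
        self.U = torch.nn.Parameter(0.01 * torch.randn(n_hidden, n_classes))
        self.c = torch.nn.Parameter(torch.zeros(n_hidden))   # hidden biases
        self.d = torch.nn.Parameter(torch.zeros(n_classes))  # class biases

    def class_scores(self, x):
        # Pre-activation of each hidden unit for each candidate class:
        # shape (batch, n_hidden, n_classes).
        pre = (x @ self.W.t() + self.c).unsqueeze(2) + self.U.unsqueeze(0)
        # log p(y|x) up to normalization: d_y + sum_j softplus(pre_{jy}).
        return self.d + F.softplus(pre).sum(dim=1)

    def forward(self, x):
        return F.log_softmax(self.class_scores(x), dim=1)

    def hidden_means(self, x, y_onehot):
        # Conditional hidden activations E[h_j | x, y] in [0, 1].
        return torch.sigmoid(x @ self.W.t() + self.c + y_onehot @ self.U.t())


def sparse_drbm_loss(model, x, y, n_classes, sparsity=0.01):
    """Discriminative negative log-likelihood plus an L1-style penalty on
    the expected hidden activations; `sparsity` is fixed in this sketch."""
    nll = F.nll_loss(model(x), y)
    y_onehot = F.one_hot(y, n_classes).float()
    penalty = model.hidden_means(x, y_onehot).mean()
    return nll + sparsity * penalty


# Toy usage on random data.
model = SketchDRBM(n_visible=20, n_hidden=16, n_classes=3)
x = torch.rand(8, 20)
y = torch.randint(0, 3, (8,))
loss = sparse_drbm_loss(model, x, y, n_classes=3)
loss.backward()
```

Because the hidden activations of a binary hidden layer lie in [0, 1], penalizing their mean acts as an L1 penalty that drives unneeded hidden units toward inactivity, which is one way to view "controlling the effective size of the hidden layer"; the paper's formulation with trainable sparsity parameters is not reproduced here.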
