Abstract

Restricted Boltzmann Machines (RBMs) are neural network models for unsupervised learning, but they have recently found a wide range of applications as feature extractors for supervised learning algorithms. They have also attracted considerable attention after being proposed as building blocks for deep belief networks. The success of these models raises the question of how best to train them. At present, the most popular training algorithm for RBMs is the Contrastive Divergence (CD) learning algorithm. The aim of this paper is to develop a new optimization algorithm tailored to training RBMs, in the hope of obtaining a faster algorithm than CD. We propose a new training algorithm for RBMs derived from an auxiliary function approach. In an experiment on training the parameters of an RBM, we confirmed that the proposed algorithm converged faster, and to a better solution, than the CD algorithm.
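For context on the baseline the abstract refers to, the following is a minimal sketch of a standard CD-1 update for a binary-binary RBM. It is not the paper's proposed auxiliary-function algorithm; all array names, shapes, and hyperparameters (learning rate, batch layout) are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1, rng=np.random.default_rng(0)):
    """One CD-1 step on a batch of visible vectors v0 (batch x n_visible).

    W: n_visible x n_hidden weights, b: visible biases, c: hidden biases.
    Returns the updated (W, b, c). Purely illustrative of standard CD-1.
    """
    # Positive phase: hidden probabilities and samples given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)

    # Negative phase: one Gibbs step (reconstruct visibles, then hiddens).
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)

    # CD-1 estimate of the log-likelihood gradient.
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```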
