Abstract

A deep belief neural network is a many-layered perceptron whose deep architecture allows it to overcome some limitations of the conventional multilayer perceptron. Standard supervised training algorithms are not effective for deep belief neural networks, so many studies have proposed a new learning procedure for deep neural networks. It consists of two stages. The first is unsupervised, layer-by-layer learning intended to initialize the parameters (pre-training of the deep belief neural network). The second is supervised training that fine-tunes the whole network. In this work we propose a training approach for the restricted Boltzmann machine based on minimization of the reconstruction squared error. The main contribution of this paper is a new interpretation of the training rules for the restricted Boltzmann machine. We show that the traditional approach to restricted Boltzmann machine training is a particular case of the proposed technique. We demonstrate the efficiency of the proposed approach using a deep nonlinear auto-encoder.

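The layer-by-layer pre-training the abstract refers to is typically built from restricted Boltzmann machines trained greedily, one layer at a time, before supervised fine-tuning of the stacked network. As a rough illustration of the kind of update involved, the sketch below shows standard one-step contrastive divergence for a single RBM while tracking the reconstruction squared error that the paper's approach minimizes; the class name, hyperparameters, and toy data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: one-step contrastive-divergence (CD-1) training of an RBM,
# monitoring reconstruction squared error. Hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        """One CD-1 update on a batch; returns reconstruction squared error."""
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)      # reconstruction of the input
        h1 = self.hidden_probs(v1)
        # Gradient approximation: positive phase minus negative phase
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))

# Toy usage: pre-train on random binary data and watch the error decrease.
data = (rng.random((256, 20)) < 0.3).astype(float)
rbm = RBM(n_visible=20, n_hidden=8)
for epoch in range(10):
    err = rbm.cd1_step(data)
print("final reconstruction squared error:", err)
```

In a full deep belief network or deep auto-encoder, the hidden activations of one trained RBM become the visible data for the next, and the resulting weights initialize the network before supervised fine-tuning.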