Abstract

The deep belief network (DBN) has become one of the most important models in deep learning; however, an un-optimized structure wastes substantial training resources. To solve this problem and to investigate the connection between the depth and accuracy of a DBN, a two-step optimization training method is proposed. First, using mathematical and biological tools, the significance of supervised training is analyzed, and a theorem relating reconstruction error to network energy is proved. Second, based on the conclusions of the first step, the structure of the DBN (in particular, the number of hidden layers) is optimized. Finally, the method is applied to two image recognition experiments, and the results show improved computing efficiency and accuracy in both tasks.
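The abstract does not give implementation details, but the idea of choosing the number of hidden layers from reconstruction error can be illustrated concretely. Below is a minimal sketch, assuming a greedy layer-wise procedure: each layer is a Bernoulli RBM trained with CD-1, and growth stops when an additional layer no longer reduces reconstruction error by a threshold. The `RBM` class, the `grow_dbn` helper, and the `tol` stopping rule are all hypothetical illustrations, not the paper's actual algorithm.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0, lr=0.05):
        """One CD-1 update; returns mean squared reconstruction error for the batch."""
        h0 = self.hidden_probs(v0)
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)   # one-step reconstruction
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))

def grow_dbn(data, hidden_size=256, max_layers=5, epochs=10, tol=1e-3):
    """Greedily stack RBMs, stopping when a new layer no longer reduces
    reconstruction error by more than `tol` (a hypothetical stopping rule)."""
    layers, v, prev_err = [], data, np.inf
    for depth in range(max_layers):
        rbm = RBM(v.shape[1], hidden_size)
        for _ in range(epochs):
            err = rbm.cd1_step(v)
        print(f"layer {depth + 1}: reconstruction error = {err:.4f}")
        if prev_err - err < tol:      # extra depth stops helping; keep shallower net
            break
        layers.append(rbm)
        prev_err = err
        v = rbm.hidden_probs(v)       # hidden activities feed the next RBM
    return layers

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = (rng.random((500, 64)) < 0.3).astype(float)  # toy binary data
    dbn = grow_dbn(X)
    print(f"selected depth: {len(dbn)} hidden layers")
```

In this sketch the depth is a data-driven output rather than a fixed hyperparameter, which mirrors the abstract's claim that optimizing the number of hidden layers saves training resources without sacrificing accuracy.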
