Abstract

The restricted Boltzmann machine (RBM) is one of the most widely used basic models in deep learning. Although many indices are available for evaluating RBM training algorithms, classification accuracy is the most convincing one, as it most directly reflects an algorithm's practical merit. RBM training algorithms are essentially sampling algorithms based on Gibbs sampling, and studies focused on algorithmic improvements have mainly faced the challenge of improving classification accuracy. To address this problem, we propose a fast Gibbs sampling (FGS) algorithm for learning the RBM. Based on Gibbs sampling theory, we establish an important link between the update of the network weights and the mixing rate of the Gibbs sampling chain, and the FGS method exploits this link to accelerate the mixing rate by adding accelerated weights and adjustment coefficients. To validate the FGS method, numerous experiments on standard datasets were performed to compare it with classical RBM training algorithms. The results showed that FGS outperformed the CD, PCD, PT5, PT10, and DGS algorithms, particularly on the handwriting database. Our findings suggest potential applications of FGS to real-world problems and demonstrate that the proposed method can build an improved RBM for classification.
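
As context for the sampling machinery the abstract refers to, below is a minimal sketch of one block Gibbs transition in a binary RBM, the building block shared by CD, PCD, PT, and the proposed FGS. This shows only the standard formulation; the accelerated weights and adjustment coefficients of FGS would enter as modifications to this update (their exact form is given in the full text), and the names (`gibbs_step`, `W`, `b`, `c`) are illustrative rather than taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c, rng):
    """One block Gibbs transition v -> h -> v' in a binary RBM.

    Standard RBM conditionals:
      p(h_j = 1 | v) = sigmoid(c_j + v @ W[:, j])
      p(v_i = 1 | h) = sigmoid(b_i + W[i, :] @ h)
    """
    p_h = sigmoid(c + v @ W)             # hidden activation probabilities
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(b + h @ W.T)           # visible reconstruction probabilities
    v_new = (rng.random(p_v.shape) < p_v).astype(float)
    return v_new, h
```

Running this transition repeatedly yields the Gibbs chain whose mixing rate the paper's FGS method aims to accelerate.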

Highlights

  • Deep learning (DL) is an important branch of machine learning that employs complex model structures or different nonlinear transformation methods to conduct high-dimensional abstract feature modeling of data [1]

  • Deep belief networks (DBNs) are stacked from restricted Boltzmann machines (RBMs), and the whole network can be trained by a greedy layer-wise learning algorithm from the bottom to the top [5]. Therefore, the quality of RBM training directly affects the quality of the DBN

  • Based on the above theoretical analysis, it can be seen that the core factor of Gibbs-sampling-based RBM training algorithms is the convergence property of the Gibbs sampling chain, i.e., the sample mixing rate. The change in the network weights is an important factor affecting the mixing rate of the Gibbs sampling chain, as the sketch after this list illustrates
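
The link between weight magnitude and mixing rate can be seen directly in the RBM conditionals: as the weights grow, the sigmoid saturates, the conditional probabilities approach 0 or 1, and the sampled states rarely change. A minimal numeric illustration follows; the scaling factors and dimensions are arbitrary, chosen only to show the trend.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=20).astype(float)  # a random binary visible state
w = rng.normal(size=20)                        # one hidden unit's weight vector

for scale in (0.1, 1.0, 10.0):
    # Conditional activation probability of the hidden unit
    p = sigmoid(v @ (scale * w))
    print(f"weight scale={scale:5.1f}  p(h=1|v)={p:.4f}")
```

Probabilities near 0.5 let the unit flip freely between states; probabilities pinned near 0 or 1 effectively freeze the chain, which is the slow-mixing regime that the weight updates influence.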

Introduction

Deep learning (DL) is an important branch of machine learning that employs complex model structures or different nonlinear transformation methods to conduct high-dimensional abstract feature modeling of data [1]. Deep belief networks (DBNs) are stacked from restricted Boltzmann machines (RBMs), and the whole network can be trained by a greedy layer-wise learning algorithm from the bottom to the top [5]. The RBM is an important model employed in DBNs and one of the most widely used models of the Markov random field (MRF); it was proposed by Smolensky in 1986 based on the Boltzmann machine (BM) [6]. RBMs are applied in dimension reduction [7], classification [8], collaborative filtering [9], feature learning [10], topic modeling [11], automatic radar target recognition [12], chip synthesis [13], and speech recognition [14]. The contrastive divergence (CD) algorithm uses a k-step Gibbs sampling chain to approximate the target gradient, and it initializes the Gibbs chain at a training sample.
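
To make the CD description concrete, here is a minimal sketch of one CD-k parameter update for a binary RBM, assuming the standard formulation; the function name and learning-rate value are illustrative, not taken from the paper. As stated above, the Gibbs chain is initialized at the training sample.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_k_update(v0, W, b, c, k=1, lr=0.01, rng=None):
    """One CD-k parameter update for a binary RBM.

    The Gibbs chain is initialized at the data v0 and run for k steps;
    the difference between data-driven and chain-end statistics
    approximates the log-likelihood gradient.
    """
    rng = rng or np.random.default_rng()
    p_h0 = sigmoid(c + v0 @ W)          # positive-phase hidden probabilities
    v, p_h = v0, p_h0
    for _ in range(k):                  # k steps of block Gibbs sampling
        h = (rng.random(p_h.shape) < p_h).astype(float)
        p_v = sigmoid(b + h @ W.T)
        v = (rng.random(p_v.shape) < p_v).astype(float)
        p_h = sigmoid(c + v @ W)
    # Gradient approximation: <v h>_data - <v h>_model
    W += lr * (np.outer(v0, p_h0) - np.outer(v, p_h))
    b += lr * (v0 - v)
    c += lr * (p_h0 - p_h)
    return W, b, c
```

Increasing k lengthens the Gibbs chain and tightens the gradient approximation at higher cost; PCD, PT, and the paper's FGS differ mainly in how this chain is initialized or how its mixing is accelerated.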
