Abstract
Stacking Restricted Boltzmann Machines (RBMs) to create deep networks, such as Deep Belief Networks (DBNs) and Deep Boltzmann Machines (DBMs), has become one of the most important research directions in deep learning. DBMs and DBNs provide state-of-the-art results in many fields such as image recognition, but they show no better learning ability than RBMs when dealing with data containing irrelevant patterns. Point-wise Gated Restricted Boltzmann Machines (pgRBMs) can effectively find the task-relevant patterns in data containing irrelevant patterns and thus achieve satisfactory classification results. To address the limitations of DBNs and DBMs in processing data containing irrelevant patterns, we introduce the pgRBM into the DBN and the DBM and present Point-wise Gated Deep Belief Networks (pgDBN) and Point-wise Gated Deep Boltzmann Machines (pgDBM). The pgDBN and the pgDBM both use the pgRBM instead of the RBM to pre-train the weights connecting the visible layer and the first hidden layer, applying the pgRBM to learn a task-relevant data subset for the traditional networks. This paper then discusses how dropout and weight uncertainty methods can be applied to prevent overfitting in pgRBM, pgDBN, and pgDBM networks. Experimental results on MNIST variation datasets show that the pgDBN and the pgDBM are effective deep networks for learning from data containing irrelevant patterns.
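The abstract describes the models only at a high level. As an illustrative sketch (not the paper's actual formulation), the core idea of combining RBM pre-training with a point-wise gate over visible units can be written as follows; the gate probabilities `g`, the toy data, and all sizes here are assumptions for illustration, and the gate is held fixed rather than learned jointly with a second "irrelevant" component as in the full pgRBM:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data: 8 visible units (hypothetical; real pgRBMs use image data).
X = (rng.random((100, 8)) < 0.5).astype(float)

n_vis, n_hid = 8, 6
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v = np.zeros(n_vis)
b_h = np.zeros(n_hid)

# Point-wise gate: per-visible-unit probability that the unit is task-relevant.
# In the actual pgRBM these switch units are inferred per example; a fixed 0.5
# is used here purely to keep the sketch short.
g = np.full(n_vis, 0.5)

lr = 0.05
for epoch in range(20):
    # Gate out presumed-irrelevant visible units, then run one step of CD-1.
    v0 = X * (rng.random(X.shape) < g)
    h0 = sigmoid(v0 @ W + b_h)
    h0_s = (rng.random(h0.shape) < h0).astype(float)  # sample hidden states
    v1 = sigmoid(h0_s @ W.T + b_v)                    # reconstruction
    h1 = sigmoid(v1 @ W + b_h)
    # Contrastive-divergence parameter updates
    W += lr * ((v0.T @ h0) - (v1.T @ h1)) / len(X)
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (h0 - h1).mean(axis=0)
```

In the pgDBN/pgDBM setting described above, such a gated RBM would replace the plain RBM only in the first (visible-to-hidden) pre-training stage; the higher layers are pre-trained as in an ordinary DBN or DBM.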