Abstract
To apply deep learning to image recognition, the basic principles, training process, and model structure of deep belief networks (DBNs) are analysed. For small-sample settings, the samples are down-sampled during the pretraining stage. During parameter fine-tuning, random dropout is introduced: hidden-layer nodes are randomly zeroed out while their weights are kept unchanged. The results show that the layer-wise training mechanism of DBNs greatly reduces both the difficulty and the time of training. On small samples, introducing down-sampling and random dropout yields a clear improvement in the recognition rate and time consumption of the deep belief network, and the overfitting phenomenon is effectively alleviated.
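The random-dropout step described above (randomly zeroing hidden-layer nodes while leaving the weights themselves untouched) can be sketched as follows. This is a minimal illustration using the common "inverted dropout" formulation with NumPy; the drop probability, array shapes, and function name are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(h, p_drop=0.5, training=True):
    """Randomly zero hidden activations; the weight matrices stay unchanged.

    h:       hidden-layer activations, shape (batch, n_hidden)
    p_drop:  probability of dropping a unit (illustrative value; the
             paper's actual rate is not stated in the abstract)
    """
    if not training:
        # At test time all units are kept and no scaling is needed.
        return h
    # Inverted dropout: kept units are scaled by 1/(1-p_drop) so the
    # expected activation matches the no-dropout forward pass.
    mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
    return h * mask

# Example: a batch of 4 samples with 8 hidden units
h = np.ones((4, 8))
h_train = dropout_forward(h, p_drop=0.5, training=True)
h_test = dropout_forward(h, training=False)
```

Because only the activations are masked, the learned weights are preserved between mini-batches, which matches the abstract's description of clearing out hidden nodes while keeping the weights unchanged.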