Abstract
Dropout is one of the most widely used methods for preventing overfitting in neural networks. However, it rigidly and randomly drops neurons according to a fixed probability, which is inconsistent with the activation patterns of neurons in the human cerebral cortex. Inspired by gene theory and the activation mechanism of brain neurons, we propose a more intelligent adaptive dropout, in which a variational autoencoder (VAE) is overlaid on an existing neural network to regularize its hidden neurons by adaptively setting their activities to zero. Through alternating iterative training, the dropout probability of each hidden neuron can be learned from the weights, thereby avoiding the shortcomings of standard dropout. Experimental results on multiple datasets show that this method suppresses overfitting in various neural networks better than standard dropout does. In addition, this adaptive dropout technique can reduce the number of neurons and improve training efficiency.
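The following is a minimal, hypothetical sketch of the core idea of learning per-neuron dropout probabilities, with a small auxiliary network standing in for the overlaid VAE. The class name AdaptiveDropout, the layer sizes, and the use of PyTorch are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of adaptive (learned) dropout; not the paper's exact VAE-based model.
import torch
import torch.nn as nn

class AdaptiveDropout(nn.Module):
    """Dropout whose per-neuron keep probabilities are predicted by a small
    auxiliary encoder (a stand-in for the overlaid VAE described in the abstract)."""

    def __init__(self, num_features, hidden=32):
        super().__init__()
        # Auxiliary network mapping the hidden activation to per-neuron keep probabilities.
        self.prob_net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_features),
            nn.Sigmoid(),  # keep probabilities in (0, 1)
        )

    def forward(self, x):
        keep_prob = self.prob_net(x)               # shape: (batch, num_features)
        if self.training:
            mask = torch.bernoulli(keep_prob)      # sample a 0/1 mask per neuron
            # Inverted-dropout scaling keeps the expected activation unchanged.
            return x * mask / keep_prob.clamp(min=1e-6)
        # At test time, no sampling: the expectation of the scaled mask is 1.
        return x

# Usage: insert the layer into an existing network after a hidden layer.
# In the spirit of the paper's alternating iterative training, one could update
# prob_net and the main network's weights in separate, alternating phases.
net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), AdaptiveDropout(256), nn.Linear(256, 10))
out = net(torch.randn(8, 784))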