Abstract

The high capacity of deep neural networks, developed for complex data, makes them prone to overfitting. Much attention has been paid to finding flexible solutions to this problem. Achieving such flexibility is challenging because improving generalization requires deep networks to cope with the stochastic effects of regularization. In this paper we propose a methodological framework for handling stochasticity in regularized deep neural networks. We first present the basics of dropout as an ensemble method for regularization, then introduce a new dropout regularization method and apply it to molecular dynamics simulations. The simulation results show that the stochastic behavior cannot be avoided and must instead be managed. The proposed dropout method improves the state of the art of applied deep neural networks on the benchmark dataset.
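As background for the dropout discussion referenced above, the following is a minimal sketch of standard (inverted) dropout in NumPy. It illustrates only the conventional technique that the paper builds on, not the new method the paper proposes; the `dropout` helper and the keep-probability parameter `p` are illustrative names, not taken from the paper.

```python
import numpy as np

def dropout(x, p=0.5, train=True, rng=np.random.default_rng()):
    """Standard (inverted) dropout.

    During training, each activation is zeroed independently with
    probability 1 - p and the survivors are rescaled by 1/p, so the
    expected activation is unchanged and no rescaling is needed at
    test time. Each random mask defines one "thinned" subnetwork,
    which is why dropout is often viewed as an implicit ensemble.
    """
    if not train:
        return x  # at test time, use all units unchanged
    mask = rng.random(x.shape) < p  # keep each unit with probability p
    return x * mask / p             # rescale so the expected output equals the input

# Example: apply dropout to a batch of hidden activations.
h = np.ones((2, 4))
print(dropout(h, p=0.8))  # roughly 20% of entries zeroed, the rest scaled by 1/0.8
```

The random mask drawn at each training step is the source of the stochastic behavior that the paper's framework is designed to handle.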
