Abstract

From a Bayesian point of view, the prior distribution over the weights of a multilayer feedforward neural network plays a central role in generalization. In this context, we propose a new prior law for the weight parameters that promotes network regularization more strongly than the previously proposed $$l_{1}$$ and $$l_{2}$$ priors. To train the network, we rely on Hamiltonian Monte Carlo, which is used to simulate both the prior and the posterior distribution. The generated samples serve to approximate the gradient of the evidence, which allows us to re-estimate the hyperparameters balancing the trade-off between the likelihood term and the regularization term; the posterior samples, in turn, are used to estimate the network output. The problems studied in this paper include regression and classification tasks. The obtained results illustrate the advantages of our approach in terms of error rate compared to earlier approaches, although our method consumes considerable time before convergence.
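For concreteness, the sketch below shows how Hamiltonian Monte Carlo can draw weight samples from a posterior of this form and average network outputs over them, as the abstract describes. It is a minimal illustration, not the paper's implementation: the toy network, the data, the hyperparameter values alpha and beta, and the Gaussian ($$l_{2}$$-style) regularizer standing in for the proposed prior law are all assumptions, and a finite-difference gradient is used only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a 1-hidden-layer network whose weights are
# flattened into a single vector w (all sizes are illustrative).
X = rng.normal(size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=50)
H = 8                                   # hidden units
n_w = H + H + H + 1                     # W1 (1xH), b1, W2 (Hx1), b2

def forward(w, X):
    W1, b1 = w[:H].reshape(1, H), w[H:2 * H]
    W2, b2 = w[2 * H:3 * H].reshape(H, 1), w[3 * H:]
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

def energy(w, alpha, beta):
    # Negative log-posterior up to a constant:
    # beta * data misfit + alpha * regularizer. A Gaussian (l2-style)
    # term stands in here for the paper's proposed prior law.
    err = forward(w, X) - y
    return 0.5 * beta * err @ err + 0.5 * alpha * w @ w

def grad(w, alpha, beta, eps=1e-5):
    # Central finite differences keep the sketch dependency-free.
    g = np.empty_like(w)
    for j in range(w.size):
        e = np.zeros_like(w)
        e[j] = eps
        g[j] = (energy(w + e, alpha, beta)
                - energy(w - e, alpha, beta)) / (2 * eps)
    return g

def hmc_step(w, alpha, beta, step=0.02, n_leap=15):
    p = rng.normal(size=w.size)                      # fresh momentum
    H_old = energy(w, alpha, beta) + 0.5 * p @ p
    w_new, p_new = w.copy(), p.copy()
    p_new -= 0.5 * step * grad(w_new, alpha, beta)   # leapfrog: half step
    for _ in range(n_leap - 1):
        w_new += step * p_new
        p_new -= step * grad(w_new, alpha, beta)
    w_new += step * p_new
    p_new -= 0.5 * step * grad(w_new, alpha, beta)   # final half step
    H_new = energy(w_new, alpha, beta) + 0.5 * p_new @ p_new
    # Metropolis correction for leapfrog discretization error.
    return w_new if rng.random() < np.exp(min(0.0, H_old - H_new)) else w

w = 0.1 * rng.normal(size=n_w)
samples = []
for t in range(600):                     # short chain; a real run needs more
    w = hmc_step(w, alpha=1.0, beta=25.0)
    if t >= 100:                         # discard burn-in
        samples.append(w.copy())

# Predict by averaging network outputs over the posterior samples.
X_test = np.linspace(-2.0, 2.0, 5).reshape(-1, 1)
print(np.mean([forward(s, X_test) for s in samples], axis=0))
```

Averaging the outputs over the retained samples approximates the posterior predictive mean; in the scheme the abstract describes, the same samples would also feed the evidence-gradient estimate used to re-estimate the hyperparameters alpha and beta.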
