Abstract

The problem of recurrent neural network training is considered here as approximate joint Bayesian estimation of the neuron outputs and the unknown synaptic weights. We implement recursive estimators that use derivative-free nonlinear approximations of the neural network dynamics. The computational efficiency and performance of the proposed algorithms, applied as training algorithms to different recurrent neural network architectures, are compared on the problem of long-term prediction of chaotic time series.
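The abstract does not spell out the estimator, so the following is a rough illustration only: a minimal Python sketch of joint recursive estimation of hidden states and weights for a toy recurrent network, assuming an unscented (sigma-point) filter as the derivative-free approximation. The architecture, readout, dimensions, and noise levels (`n_h`, `readout`, `q`, `r`) are hypothetical choices for the sketch, not the paper's.

```python
import numpy as np

# Toy recurrent network: h' = tanh(W h + u x), scalar readout y = h[0].
n_h = 4                       # hidden neurons (illustrative size)
n_w = n_h * n_h + n_h         # recurrent matrix W plus input weights u
n = n_h + n_w                 # augmented state: [hidden states; weights]

def rnn_step(z, x):
    """Propagate one augmented sigma point: the hidden part goes through
    the network dynamics, the weight part follows a random walk."""
    h, w = z[:n_h], z[n_h:]
    W = w[:n_h * n_h].reshape(n_h, n_h)
    u = w[n_h * n_h:]
    return np.concatenate([np.tanh(W @ h + u * x), w])

def readout(z):
    return z[0]               # observe the first hidden neuron

def sigma_points(m, P, kappa=1.0):
    """Standard unscented-transform sigma points (derivative free)."""
    S = np.linalg.cholesky((n + kappa) * P)
    pts = [m] + [m + S[:, i] for i in range(n)] + [m - S[:, i] for i in range(n)]
    wts = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    wts[0] = kappa / (n + kappa)
    return np.array(pts), wts

def ukf_step(m, P, x, y, q=1e-5, r=1e-2):
    """One predict/update cycle on the joint state-weight density."""
    # Predict: push sigma points through the network dynamics.
    pts, wts = sigma_points(m, P)
    pts_f = np.array([rnn_step(p, x) for p in pts])
    m_p = wts @ pts_f
    P_p = sum(w * np.outer(p - m_p, p - m_p) for w, p in zip(wts, pts_f))
    P_p += q * np.eye(n)      # process noise keeps the weights adaptable
    # Update with the scalar observation y.
    pts, wts = sigma_points(m_p, P_p)
    ys = np.array([readout(p) for p in pts])
    y_p = wts @ ys
    S = wts @ (ys - y_p) ** 2 + r
    C = sum(w * (p - m_p) * (yy - y_p) for w, p, yy in zip(wts, pts, ys))
    K = C / S
    m_n = m_p + K * (y - y_p)
    P_n = P_p - np.outer(K, K) * S
    return m_n, 0.5 * (P_n + P_n.T) + 1e-9 * np.eye(n)  # keep P symmetric PD

# Usage: train on a noisy sine wave, predicting y[t] from y[t-1].
rng = np.random.default_rng(0)
series = np.sin(0.3 * np.arange(200)) + 0.05 * rng.standard_normal(200)
m = np.concatenate([np.zeros(n_h), 0.1 * rng.standard_normal(n_w)])
P = 0.1 * np.eye(n)
for t in range(1, len(series)):
    m, P = ukf_step(m, P, series[t - 1], series[t])
```

The design point the sketch illustrates: the sigma points are propagated through the network dynamics directly, so no Jacobians (and hence no backpropagation through time) are needed, which is one common reading of "derivative free" in this setting.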
