Abstract

This paper considers on-line training of feedforward neural networks, in which training examples are available only through sampling from a certain, possibly infinite, distribution. To make the learning process autonomous, one can employ the Extended Kalman Filter or stochastic steepest descent with adaptively adjusted step-sizes; the latter is considered here. A scheme for determining step-sizes is introduced that satisfies the following requirements: (i) it needs no auxiliary problem-dependent parameters; (ii) it does not assume any particular loss function that the training process is intended to minimize; (iii) it keeps the learning process stable and efficient. An experimental study on several approximation problems is presented, in which the proposed approach is compared with the Extended Kalman Filter and LFI, with satisfactory results.
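To make the setting concrete, the sketch below shows on-line stochastic steepest descent for a small feedforward network with a step-size adapted during training. The sign-agreement adaptation rule, the network architecture, the target function, and all numeric constants (step, up, down) are illustrative assumptions only; this is not the step-size scheme introduced in the paper.

```python
# Minimal sketch: on-line stochastic steepest descent with an adaptive
# step-size. The adaptation rule (grow the step-size when consecutive
# gradients agree, shrink it otherwise) is a classic heuristic chosen
# for illustration, NOT the scheme proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Tiny feedforward network: one tanh hidden layer, squared-error loss.
n_in, n_hidden = 1, 16
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(1, n_hidden))
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    y = W2 @ h + b2
    return h, y

step = 0.01           # initial step-size (hypothetical value)
up, down = 1.05, 0.7  # adaptation factors (hypothetical values)
prev_grads = None

for t in range(10000):
    # Examples arrive only by sampling from the target distribution.
    x = rng.uniform(-2.0, 2.0, size=n_in)
    target = np.sin(x)          # example approximation problem

    h, y = forward(x)
    err = y - target            # d(0.5 * squared error) / dy

    # Backpropagate through the two layers.
    gW2 = np.outer(err, h)
    gb2 = err
    gpre = (W2.T @ err) * (1.0 - h**2)
    gW1 = np.outer(gpre, x)
    gb1 = gpre
    grads = [gW1, gb1, gW2, gb2]

    # Adapt the step-size from sign agreement of consecutive gradients;
    # the cap of 0.5 is an arbitrary safeguard against divergence.
    if prev_grads is not None:
        agree = sum(np.sum(g * pg) for g, pg in zip(grads, prev_grads))
        step = min(step * up, 0.5) if agree > 0 else step * down
    prev_grads = grads

    # Stochastic steepest descent update.
    W1 -= step * gW1; b1 -= step * gb1
    W2 -= step * gW2; b2 -= step * gb2
```

The sign-agreement heuristic captures the intuition behind adaptive step-sizes: persistent gradient agreement suggests the step-size is too conservative, while sign flips suggest overshooting. The paper's scheme differs in that it requires no auxiliary problem-dependent parameters such as `up` and `down`.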
