Abstract

A neural network for function approximation is treated theoretically. The structure and information processing of this network are similar to those of the forward-only counterpropagation network, but the learning rule is improved: the hidden layer is trained by the self-organizing map rule instead of winner-take-all. For the case of a target function and a network with one input and one output, the parameters of the learning rule are derived theoretically so that the function approximation attains the least squared error, and the theoretical results are verified by computer simulations.
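The abstract's idea can be illustrated with a minimal sketch of a forward-only counterpropagation network for the one-input, one-output case: the hidden layer is a one-dimensional codebook trained with a self-organizing map (SOM) neighborhood update rather than pure winner-take-all, and the output layer holds one weight per hidden unit, pulled toward the target value. The target function, unit count, and learning-rate/neighborhood schedules below are assumptions for illustration, not the paper's derived parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-input, 1-output target function (not from the paper).
def f(x):
    return np.sin(2 * np.pi * x)

N = 30                      # number of hidden (SOM) units
w = np.sort(rng.random(N))  # hidden-layer weights (codebook) on the input axis
v = np.zeros(N)             # output-layer weights, one per hidden unit

# Training: SOM neighborhood rule on the hidden layer instead of
# winner-take-all, plus an LMS-style update of the output weights.
epochs = 40
for t in range(epochs):
    eta = 0.5 * (1 - t / epochs)                 # decaying learning rate (assumed)
    sigma = max(1.0, (N / 4) * (1 - t / epochs)) # shrinking neighborhood width (assumed)
    for x in rng.random(200):
        c = np.argmin(np.abs(w - x))             # best-matching hidden unit
        h = np.exp(-0.5 * ((np.arange(N) - c) / sigma) ** 2)  # neighborhood function
        w += eta * h * (x - w)                   # SOM update of hidden weights
        v += eta * h * (f(x) - v)                # pull output weights toward target

# Recall: piecewise-constant approximation using the winning unit's output weight.
xs = np.linspace(0, 1, 200)
ys = np.array([v[np.argmin(np.abs(w - x))] for x in xs])
err = np.mean((ys - f(xs)) ** 2)
print(f"mean squared error: {err:.4f}")
```

The SOM neighborhood spreads each update over adjacent hidden units, which orders the codebook along the input axis and avoids the dead units that plain winner-take-all can leave; the paper's contribution is choosing such parameters theoretically to minimize the squared approximation error.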
