Abstract

Function approximation is an instance of supervised learning, one of the most studied topics in machine learning, artificial neural networks, pattern recognition, and statistical curve fitting. In principle, any of the methods studied in these fields can be used in reinforcement learning. Multi-layered feed-forward neural networks (MLFNNs) have been used extensively for function approximation. Another class of neural networks, the Bidirectional Associative Memory (BAM), has also been studied and applied to pattern-mapping problems, and many variations have been reported in the literature. In the present study, the backpropagation algorithm is applied to an MLFNN in such a way that the feed-forward architecture behaves like a BAM. Various four-layer architectures have been explored in a quest to find the optimal architecture for the example function.
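
The abstract does not give implementation details, so the following is only a minimal sketch of one plausible reading: a four-layer MLFNN trained with backpropagation on a pattern pair in each direction, so that the pair can be recalled either way, loosely mimicking BAM behaviour. The class name, layer sizes, learning rate, and example pattern pair are all illustrative assumptions, not the authors' configuration.

```python
# Sketch (assumption, not the authors' exact method): a four-layer
# feed-forward network trained with backpropagation, used in both
# mapping directions to imitate BAM-style bidirectional recall.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FourLayerNet:
    """Four-layer MLFNN: input -> hidden1 -> hidden2 -> output."""
    def __init__(self, sizes, lr=0.5):
        # sizes = [n_in, n_h1, n_h2, n_out]; small random initial weights
        self.W = [rng.normal(0.0, 0.5, (sizes[i], sizes[i + 1])) for i in range(3)]
        self.b = [np.zeros(sizes[i + 1]) for i in range(3)]
        self.lr = lr

    def forward(self, x):
        # Return activations of every layer, input included
        a = [x]
        for W, b in zip(self.W, self.b):
            a.append(sigmoid(a[-1] @ W + b))
        return a

    def train_step(self, x, t):
        a = self.forward(x)
        # Backpropagation with squared-error loss; sigmoid derivative is a*(1-a)
        delta = (a[-1] - t) * a[-1] * (1.0 - a[-1])
        for i in reversed(range(3)):
            grad_W = np.outer(a[i], delta)
            grad_b = delta
            if i > 0:
                # Propagate the error using the pre-update weights
                delta = (delta @ self.W[i].T) * a[i] * (1.0 - a[i])
            self.W[i] -= self.lr * grad_W
            self.b[i] -= self.lr * grad_b

# Hypothetical pattern pair, chosen only for illustration
x = np.array([1.0, 0.0, 1.0, 0.0])
y = np.array([0.0, 1.0, 1.0])

# Two four-layer networks: one learns x -> y, the other y -> x,
# giving BAM-like recall in both directions.
fwd = FourLayerNet([4, 6, 6, 3])
bwd = FourLayerNet([3, 6, 6, 4])

for _ in range(5000):
    fwd.train_step(x, y)
    bwd.train_step(y, x)

print("recall y from x:", np.round(fwd.forward(x)[-1], 2))
print("recall x from y:", np.round(bwd.forward(y)[-1], 2))
```

After training, each network reproduces its target pattern from the paired input, which is the bidirectional recall property the abstract attributes to the BAM-like use of the feed-forward architecture.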
