Abstract

Setting up a neural network with a learning algorithm that determines how it can best operate is an efficient way to formulate control systems for many engineering applications, and is often much more feasible than direct programming. This paper examines three important aspects of this approach: the details of the cost function used with the gradient descent learning algorithm, how the resulting system depends on the initial pre-learning connection weights, and how it depends on the pattern of learning rates chosen for the different components of the system. We explore these issues through explicit simulations of a toy model that is a simplified abstraction of part of the human oculomotor control system. This allows us to compare our system with that produced by human evolution and development. We can then go on to consider how we might improve on the human system and apply what we have learnt to control systems that have no human analogue.
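
As a rough illustration of the three ingredients highlighted above, the sketch below trains a tiny feedforward network by gradient descent on a quadratic cost, with the initial connection weights and the per-component learning rates exposed as explicit choices. This is a minimal sketch under assumed values; the network, task, cost, and learning rates are illustrative and are not the paper's oculomotor model.

```python
# Minimal sketch (not the paper's model) of the three choices the abstract
# highlights: the cost function, the initial connection weights, and the
# per-component learning rates.  All names and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: the controller should map a command signal x to an output 0.5 * x.
x = rng.uniform(-1.0, 1.0, size=(200, 1))
y_target = 0.5 * x

# Initial pre-learning connection weights (one of the factors studied).
W1 = 0.1 * rng.standard_normal((1, 8))
W2 = 0.1 * rng.standard_normal((8, 1))

# Different learning rates for different components of the system.
eta_W1, eta_W2 = 0.05, 0.01

for step in range(2000):
    h = np.tanh(x @ W1)          # hidden layer
    y = h @ W2                   # network output

    # Quadratic cost function; its details shape the learned controller.
    err = y - y_target
    cost = 0.5 * np.mean(err ** 2)

    # Gradient descent on the cost, one learning rate per weight matrix.
    grad_W2 = h.T @ err / len(x)
    grad_h = (err @ W2.T) * (1.0 - h ** 2)
    grad_W1 = x.T @ grad_h / len(x)
    W1 -= eta_W1 * grad_W1
    W2 -= eta_W2 * grad_W2

print("final cost:", cost)
```

Varying the initial weights or the ratio of the two learning rates in such a sketch changes which solution gradient descent settles on, which is the kind of dependence the paper investigates in its oculomotor toy model.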
