Abstract

A control surface can be learned and represented by a neural network by adopting a reinforcement learning scheme. The authors use a neural network to learn a mapping between a dynamic system's state space and the space of possible control actions. The system's state space is defined incrementally, and an appropriate control action is assigned to each region of the state space, with states presented as binary input vectors. One problem with this type of learning control is learning the state-space partitioning itself, i.e., whether the system can automatically partition the state space into a number of control situations. If it can, learning can be achieved faster and in a near-optimal way. The unsupervised learning algorithm for adaptive state-space partitioning is based both on BOXES and on G.A. Carpenter and S. Grossberg's (1988) ART network. The learning algorithm performed adequately in a series of performance trials, using the hand-partitioned BOXES learning algorithm as the performance baseline.
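
The abstract gives no implementation details, so the following is a minimal, hypothetical Python sketch of the kind of scheme it describes: an ART-style unsupervised categorizer that incrementally partitions binary-coded states into "boxes", with each box holding a BOXES-style action preference updated by reinforcement. All names and parameters here (AdaptivePartitionController, vigilance, lr, the bang-bang action coding) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class AdaptivePartitionController:
    """Sketch of ART-style adaptive state-space partitioning combined
    with a BOXES-style action preference per partition (illustrative)."""

    def __init__(self, vigilance=0.9, lr=0.1):
        self.vigilance = vigilance   # ART vigilance: match needed to reuse a box
        self.lr = lr                 # learning rate for action preferences
        self.prototypes = []         # one binary prototype vector per box
        self.preferences = []        # scalar action preference per box

    def _match(self, x, w):
        # ART1-style match: overlap of input and prototype,
        # normalized by the number of active input bits.
        return np.sum(np.minimum(x, w)) / max(np.sum(x), 1)

    def categorize(self, x):
        """Return the box index for binary state vector x, creating a
        new box when no prototype passes the vigilance test."""
        best, best_score = None, -1.0
        for j, w in enumerate(self.prototypes):
            s = self._match(x, w)
            if s > best_score:
                best, best_score = j, s
        if best is not None and best_score >= self.vigilance:
            # Resonance: refine the winning prototype toward the input
            # (fast-learning AND rule, as in ART1).
            self.prototypes[best] = np.minimum(self.prototypes[best], x)
            return best
        # Mismatch everywhere: allocate a new box (new control situation).
        self.prototypes.append(x.copy())
        self.preferences.append(0.0)
        return len(self.prototypes) - 1

    def act(self, x):
        j = self.categorize(x)
        # Bang-bang control, as in the classic BOXES pole-balancing task.
        return j, (1 if self.preferences[j] >= 0 else -1)

    def reinforce(self, box, action, reward):
        # Shift the box's preference toward actions that earned reward.
        self.preferences[box] += self.lr * reward * action
```

A short usage example under the same assumptions:

```python
ctrl = AdaptivePartitionController(vigilance=0.8)
state = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # binary-coded system state
box, u = ctrl.act(state)                    # pick or create a box, act
ctrl.reinforce(box, u, reward=1.0)          # e.g., system still in bounds
```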
