Abstract

A review of the basic methods used to model a learning agent, such as instance-based learning, artificial neural networks, and reinforcement learning, suggests that they either lack flexibility (they can only be used to solve a small number of problems) or tend to converge very slowly to the optimal policy. This paper describes and illustrates a set of processes that address these two shortcomings. The resulting learning agent adapts well to a much larger set of environments and does so in a reasonable amount of time. To address the lack of flexibility and the slow convergence to the optimal policy, the new learning agent is a hybrid between an agent based on instance-based learning and one based on reinforcement learning. To accelerate its convergence to the optimal policy, this agent incorporates a new concept we call propagation of good findings. Furthermore, to make better use of the agent's memory resources, and therefore increase its flexibility, we use another new concept we call moving prototypes.
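To make the hybrid idea concrete, the sketch below shows one plausible way an instance-based memory can be combined with a reinforcement-learning update: Q-values are stored as (state, action, value) instances and estimated for new states by distance-weighted averaging over the nearest stored instances, while updates follow the standard Q-learning rule. All names, parameters, and the simple memory cap are illustrative assumptions; the paper's actual propagation of good findings and moving prototypes mechanisms are not reproduced here.

```python
import math
import random


class InstanceBasedQAgent:
    """Minimal sketch of a hybrid instance-based / reinforcement-learning agent.

    This is an illustration of the general idea only; the class and parameter
    names are hypothetical and do not reproduce the paper's exact algorithm.
    """

    def __init__(self, actions, k=5, alpha=0.2, gamma=0.95, max_prototypes=500):
        self.actions = actions              # discrete action set
        self.k = k                          # neighbors used in each estimate
        self.alpha = alpha                  # learning rate
        self.gamma = gamma                  # discount factor
        self.max_prototypes = max_prototypes
        self.prototypes = []                # stored [state, action, q_value] instances

    def _distance(self, s1, s2):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(s1, s2)))

    def q_value(self, state, action):
        # Distance-weighted average of the k nearest stored instances for this
        # action; returns 0.0 when nothing relevant has been stored yet.
        matches = [p for p in self.prototypes if p[1] == action]
        if not matches:
            return 0.0
        matches.sort(key=lambda p: self._distance(state, p[0]))
        nearest = matches[: self.k]
        weights = [1.0 / (1e-6 + self._distance(state, p[0])) for p in nearest]
        return sum(w * p[2] for w, p in zip(weights, nearest)) / sum(weights)

    def choose_action(self, state, epsilon=0.1):
        # Epsilon-greedy action selection over the instance-based estimates.
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q_value(state, a))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning target, written back into the instance memory.
        best_next = max(self.q_value(next_state, a) for a in self.actions)
        target = reward + self.gamma * best_next
        old = self.q_value(state, action)
        new_q = old + self.alpha * (target - old)
        self.prototypes.append([list(state), action, new_q])
        # Crude memory cap standing in for a prototype-management scheme;
        # the paper's moving prototypes mechanism is not implemented here.
        if len(self.prototypes) > self.max_prototypes:
            self.prototypes.pop(0)
```

Such an agent keeps the generality of instance-based learning (no fixed parametric model of the environment) while still converging through reinforcement-learning updates, which is the combination the abstract describes.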
