Abstract

This paper introduces an approach to reinforcement learning by cooperating agents using a near set-based variation of the Peters-Henry-Lockery rough set-based actor-critic adaptive learning method. Near sets were introduced by James Peters in 2006 and formally defined in 2007. Near sets result from a generalization of rough set theory. One set X is near another set Y to the extent that the description of at least one of the objects in X matches the description of at least one of the objects in Y. The hallmark of near set theory is object description and the classification of objects by means of features. Rough sets were introduced by Zdzisław Pawlak during the early 1980s and provide a basis for the perception of objects viewed at the level of classes rather than at the level of individual objects. A fundamental basis for near set as well as rough set theory is the approximation of one set by another set, considered in the context of approximation spaces. It was observed by Ewa Orłowska in 1982 that approximation spaces serve as a formal counterpart of perception, or observation. This article extends earlier work on an ethology-based Peters-Henry-Lockery actor-critic method that is episodic and is defined in the context of an approximation space. The contribution of this article is a framework for actor-critic learning defined in the context of near sets. This paper also reports the results of experiments with three different forms of the actor-critic method.

Keywords: Adaptive learning, approximation space, ethogram, ethology, actor critic, near sets, rough sets
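The nearness relation described above can be sketched in code: objects are described by tuples of feature (probe-function) values, and a set X is near a set Y when at least one object in X shares a description with at least one object in Y. This is a minimal illustration, not the paper's implementation; the probe functions and names below are hypothetical.

```python
def description(obj, probe_functions):
    """Describe an object as a tuple of probe-function (feature) values."""
    return tuple(phi(obj) for phi in probe_functions)

def near(X, Y, probe_functions):
    """X is near Y if some object in X has the same description
    (matching feature values) as some object in Y."""
    descs_X = {description(x, probe_functions) for x in X}
    descs_Y = {description(y, probe_functions) for y in Y}
    return not descs_X.isdisjoint(descs_Y)

# Illustrative probe functions: parity and sign of an integer.
probes = [lambda n: n % 2, lambda n: 1 if n > 0 else 0]
X = {2, 5}
Y = {4, -3}
print(near(X, Y, probes))  # 2 and 4 share the description (even, positive) -> True
```

Here nearness depends entirely on the chosen probe functions: with a different feature set, the same two sets of objects may fail to be near one another.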
