Abstract

Reinforcement Learning (RL) methods became popular decades ago and remain one of the mainstream topics in computational intelligence. Countless RL methods and variants can be found in the literature, each with its own advantages and disadvantages in specific application domains. The acquired knowledge can be represented in several ways depending on the exact RL method, e.g. as simple discrete Q-tables, fuzzy rule-bases, or artificial neural networks. Introducing interpolation within the knowledge base allows less important, redundant information to be omitted while keeping the system functional. FRIQ-learning, a Fuzzy Rule Interpolation-based (FRI) RL method, possesses this feature. By omitting unimportant, dependent fuzzy rules and thereby emphasizing the cardinal entries of the knowledge representation, FRIQ-learning is also suitable for knowledge extraction. In this paper, the fundamental concepts of FRIQ-learning, associated extensions of the method, and benchmarks are discussed.
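To make the core idea concrete, the sketch below illustrates (in Python) a sparse fuzzy Q rule-base whose values between stored rules are obtained by interpolation, as the abstract describes. It is a simplified illustration under assumptions, not the authors' exact algorithm: a plain inverse-distance (Shepard-style) interpolation stands in for the FRI step, and the class and parameter names (`SparseFuzzyQ`, `alpha`, `gamma`, `p`) are hypothetical.

```python
import numpy as np

class SparseFuzzyQ:
    """Illustrative sketch: Q-learning over a sparse rule-base with interpolation.

    Assumptions: inverse-distance interpolation approximates the FRI step;
    rule insertion/pruning heuristics of FRIQ-learning are omitted for brevity.
    """

    def __init__(self, alpha=0.5, gamma=0.9, p=2.0):
        self.rules = []                     # list of (antecedent point, Q-value) pairs
        self.alpha, self.gamma, self.p = alpha, gamma, p

    def q(self, x):
        """Interpolated Q-value at a state-action point x (1-D array)."""
        if not self.rules:
            return 0.0
        pts = np.array([r[0] for r in self.rules])
        qs = np.array([r[1] for r in self.rules])
        d = np.linalg.norm(pts - x, axis=1)
        if np.any(d < 1e-9):                # exact hit on a stored rule antecedent
            return float(qs[np.argmin(d)])
        w = 1.0 / d ** self.p               # inverse-distance weights
        return float(w @ qs / w.sum())

    def update(self, s, a, r, s_next, actions):
        """Standard Q-learning update applied to the interpolated rule-base."""
        x = np.concatenate([s, [a]])
        target = r + self.gamma * max(
            self.q(np.concatenate([s_next, [b]])) for b in actions)
        td = target - self.q(x)
        # Store a new rule moved toward the TD target; FRIQ-learning would also
        # prune rules whose removal barely changes the interpolated Q-function.
        self.rules.append((x, self.q(x) + self.alpha * td))
```

In this simplified view, knowledge extraction corresponds to discarding rules whose removal leaves the interpolated Q-function (and hence the learned policy) essentially unchanged, so that only the cardinal rules remain.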
