Abstract

This paper focuses on the generalization of classifiers in noisy problems and aims to explore learning classifier systems (LCSs) that can evolve accurately generalized classifiers as an optimal solution in environments that include different types of noise. For this purpose, this paper employs XCS-CRE (XCS without Convergence of Reward Estimation), which can correctly identify classifiers as either accurate or inaccurate even in a noisy problem, and investigates its effectiveness in several noisy problems. Through intensive experiments with three LCSs (i.e., XCS as the conventional LCS, XCS-SAC (XCS with Self-adaptive Accuracy Criterion) as our previous LCS, and XCS-CRE) on the noisy 11-multiplexer problem, where the reward value changes according to (a) a Gaussian distribution, (b) a Cauchy distribution, or (c) a lognormal distribution, the following implications have been revealed: (1) the correct rates of the classifiers of XCS-CRE and XCS-SAC converge to 100% under all three reward distributions, while that of XCS cannot reach 100%; (2) the population size of XCS-CRE is the smallest, followed by those of XCS-SAC and XCS; and (3) the percentage of acquired optimal classifiers of XCS-CRE is the highest, followed by those of XCS-SAC and XCS.
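
To make the experimental setting concrete, the following is a minimal sketch (in Python) of a noisy 11-multiplexer reward function under the three noise models mentioned above. The payoff levels (1000/0) and the noise-scale parameters are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def multiplexer_11(bits):
    """Ground-truth 11-multiplexer: the first 3 bits address one of the 8 data bits."""
    address = bits[0] * 4 + bits[1] * 2 + bits[2]
    return bits[3 + address]

def noisy_reward(bits, action, noise="gaussian", scale=100.0, rng=None):
    """Base reward (assumed 1000 for a correct action, 0 otherwise) perturbed by
    Gaussian, Cauchy, or lognormal noise; scale is an illustrative parameter."""
    rng = rng or np.random.default_rng()
    base = 1000.0 if action == multiplexer_11(bits) else 0.0
    if noise == "gaussian":
        return base + rng.normal(0.0, scale)
    if noise == "cauchy":
        return base + scale * rng.standard_cauchy()
    if noise == "lognormal":
        return base + scale * rng.lognormal(mean=0.0, sigma=1.0)
    return base

# Example: one random training step with Cauchy-distributed reward noise
rng = np.random.default_rng(0)
state = rng.integers(0, 2, size=11)
print(noisy_reward(state, action=1, noise="cauchy", rng=rng))
```

The heavy-tailed Cauchy case is the one that most stresses an LCS's accuracy criterion, since outlier rewards can make an accurate classifier appear inaccurate; this is the situation XCS-CRE is designed to handle.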
