Abstract

Learning Classifier Systems (LCSs) excel at data mining tasks: an optimal LCS model contains patterns that precisely reveal how features identify classes for the explored problem. However, the underlying evolutionary computation (EC) is stochastic, which leads LCSs to produce and retain redundant rules that obscure these patterns. LCSs therefore employ rule compaction methods to improve models by removing problematic rules, but former compaction methods often fail to highlight the patterns and reduce accuracy after compaction. A survey of compaction methods is provided to investigate why compaction algorithms fail to achieve the expected performance. This work identifies a new format for an LCS's optimal solution: a model consisting of all correct, unsubsumable rules in a given search space. Such an optimal solution is accurate on arbitrary noiseless datasets and naturally contains interpretable patterns. Accordingly, two compaction methods designed to produce such optimal solutions are proposed. Successful compaction is demonstrated on complex and real problems with noiseless datasets, e.g. the 11-bit Majority-On problem, in which all 924 interacting rules of the optimal solution must be uniquely identified to enable correct visualization of the discovered knowledge. For the first time, the patterns contained in learned models for the large-scale 70-bit Multiplexer problem are visualized successfully.
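For readers unfamiliar with the two benchmark families named above, they can be sketched in a few lines of Python (a minimal illustration only; the function names are my own, not from the paper — note that 70 = 6 address bits + 2^6 data bits for the Multiplexer problem):

```python
def majority_on(bits):
    """Majority-On: output 1 iff more than half of the input bits are 1.
    The paper's instance uses 11 input bits."""
    return int(sum(bits) > len(bits) // 2)

def multiplexer(bits, k):
    """Multiplexer with k address bits: the first k bits form a binary
    address that selects one of the remaining 2**k data bits.
    k=6 gives the 70-bit problem (6 + 64 = 70 input bits)."""
    assert len(bits) == k + 2 ** k
    address = int("".join(map(str, bits[:k])), 2)
    return bits[k + address]
```

Both functions are fully defined over noiseless binary inputs, which is why an LCS model for them can, in principle, be compacted to an exact set of correct, unsubsumable rules.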
