Abstract

Learning Classifier Systems (LCSs) are a family of rule-based evolutionary computation techniques that have frequently been applied to data mining tasks. LCS rules are designed to be human-readable so that the underlying knowledge can be investigated. However, for most domains with high feature interaction, the models contain a large number of rules that cooperatively represent the knowledge, and the interaction between so many rules is too complex for humans to comprehend. Thus, it is hypothesized that translating the models' underlying patterns into human-discernible visualizations will advance understanding of both the learned patterns and LCSs themselves. Interrogatable artificial Boolean domains with varying numbers of attributes are used as benchmarks. Three new visualization techniques, termed the Feature Importance Map, the Action-based Feature Importance Map, and the Action-based Feature's Average value Map, produce interpretable results for all the complex domains tested, both for tracing training progress and for analyzing trained LCS models. The techniques' ability to handle complex optimal solutions is demonstrated on the 14-bit Majority-On problem, where the patterns from 6435 different cooperating rules were translated into human-discernible graphs.
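
The abstract does not define the maps precisely; the following is a minimal sketch of how a Feature Importance Map and its action-based variant might be aggregated from an LCS rule population, assuming importance is the numerosity-weighted frequency with which each feature is specified (i.e. not a don't-care '#') in a rule's condition. The rule format and weighting here are illustrative assumptions, not the authors' exact definitions.

```python
from collections import defaultdict

# Illustrative rule population for a 6-attribute Boolean problem.
# Each rule: (condition, action, numerosity), where the condition is a
# ternary string and '#' is the don't-care symbol.
rules = [
    ("1#0###", 1, 12),
    ("##01#1", 0, 7),
    ("0#1##0", 1, 3),
]

def feature_importance_map(rules, n_features):
    """Per-feature importance: numerosity-weighted fraction of rules
    whose condition specifies the feature (symbol is not '#')."""
    total = sum(num for _, _, num in rules)
    importance = [0.0] * n_features
    for cond, _, num in rules:
        for i, symbol in enumerate(cond):
            if symbol != '#':
                importance[i] += num
    return [v / total for v in importance]

def action_based_feature_importance(rules, n_features):
    """Same aggregation, computed separately for each advocated action."""
    by_action = defaultdict(list)
    for rule in rules:
        by_action[rule[1]].append(rule)
    return {a: feature_importance_map(rs, n_features)
            for a, rs in by_action.items()}

if __name__ == "__main__":
    n = 6
    print("Feature Importance Map:", feature_importance_map(rules, n))
    for action, fim in sorted(action_based_feature_importance(rules, n).items()):
        print(f"Action {action}:", fim)
```

In practice these per-feature values would be rendered as heatmaps or bar charts (one panel per action for the action-based variant) rather than printed, which is what makes the cooperating rule set discernible at a glance.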
