Abstract

A hierarchical approach that decomposes a high-dimensional model into a series of low-dimensional sub-models can be an effective way to overcome the `curse of dimensionality' problem. We investigate a hierarchy of linguistic decision trees (LDTs) for classification and present a linguistic interpretation of the hierarchy of LDTs. Because the relationship between the input attributes and the goal is uncertain and non-linear, different hierarchies can perform differently in classification. We develop a genetic algorithm (GA) with linguistic ID3 in a wrapper to optimise linguistic attribute hierarchies. The experimental results show that optimised linguistic attribute hierarchies perform better on the benchmark databases than a single LDT does, and that they can greatly reduce the number of rules when the relationship between the goal variable and the input attributes is highly uncertain and non-linear. Compared with well-known machine learning approaches (C4.5, Naive Bayes, and Neural Networks), the optimised linguistic attribute hierarchy achieves the highest accuracy on most of the tested databases. A trained hierarchy can serve as a real-time classifier when the optimisation of hierarchies is performed offline.
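The wrapper-style GA optimisation described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the attribute count, the partition-based encoding of a hierarchy, and the toy fitness function are all assumptions. In the paper, evaluating a candidate hierarchy would mean training linguistic decision trees with linguistic ID3 on each sub-model and scoring classification accuracy.

```python
import random

# Hypothetical toy setup: a candidate "hierarchy" is encoded as a
# partition of attribute indices into low-dimensional groups, each
# group standing in for one low-dimensional sub-model (an LDT).
ATTRIBUTES = list(range(6))  # six input attributes (illustrative)

def random_hierarchy():
    """Randomly partition the attributes into groups of two (toy encoding)."""
    attrs = ATTRIBUTES[:]
    random.shuffle(attrs)
    return [attrs[i:i + 2] for i in range(0, len(attrs), 2)]

def fitness(hierarchy):
    """Stand-in for the wrapper evaluation. In the paper this would
    train an LDT per group with linguistic ID3 and return the
    classification accuracy of the whole hierarchy."""
    # Toy score: reward groups that contain consecutive attribute indices.
    return sum(1 for g in hierarchy
               for a, b in zip(g, g[1:]) if abs(a - b) == 1)

def ga(pop_size=20, generations=30, seed=0):
    """Simple generational GA: keep the best half, refill with new draws."""
    random.seed(seed)
    population = [random_hierarchy() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Crude "mutation": re-draw hierarchies to refill the population.
        population = survivors + [random_hierarchy() for _ in survivors]
    return max(population, key=fitness)

best = ga()
print(best, fitness(best))
```

Because evaluating each candidate requires training a full set of sub-models, the GA search is expensive, which is why the abstract notes that the optimisation is done offline while the resulting trained hierarchy can classify in real time.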
