Abstract

Associative classification is a promising technique to build accurate classifiers. However, in large or correlated data sets, association rule mining may yield huge rule sets. Hence, several pruning techniques have been proposed to select a small subset of high-quality rules. Since the availability of a rich rule set may improve the accuracy of the classifier, we argue that rule pruning should be reduced to a minimum. The L3 associative classifier is built by means of a lazy pruning technique that discards only those rules that exclusively misclassify training data. The classification of unlabeled data is performed in two steps. A small subset of high-quality rules is considered first. When this set cannot classify the data, a larger rule set is exploited. This second set includes rules usually discarded by previous approaches. To cope with the need to mine large rule sets and to use them efficiently for classification, a compact form is proposed to represent a complete rule set in a space-efficient way and without information loss. An extensive experimental evaluation on real and synthetic data sets shows that L3 improves classification accuracy with respect to previous approaches.
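As an illustration of the two-step scheme described above, here is a minimal sketch in Python (all names are hypothetical): rules that only misclassify training data are discarded during lazy pruning, the remaining rules are split into a high-quality first level and a larger second level, and classification falls back to the second level only when no first-level rule matches. The confidence/support ordering used here is an assumption; the paper defines the exact rule-quality ordering and the compact rule representation.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    antecedent: frozenset   # items that must all appear in a record
    label: str              # predicted class
    confidence: float
    support: float

    def matches(self, record):
        return self.antecedent <= record

def lazy_prune(rules, training_data):
    """Split mined rules into two levels, discarding only 'harmful' rules.

    level1: rules that correctly classify at least one training record.
    level2: rules matching no training record, kept for lazy use at
            classification time (often discarded by eager approaches).
    Rules whose matches are all misclassifications are dropped.
    """
    level1, level2 = [], []
    for r in rules:
        covered = [y for items, y in training_data if r.matches(items)]
        if not covered:
            level2.append(r)
        elif any(y == r.label for y in covered):
            level1.append(r)
        # else: the rule only misclassifies training data -> discard
    return level1, level2

def classify(record, level1, level2, default_label=None):
    """Two-step classification: try the high-quality level-1 rules first,
    falling back to the larger level-2 set only when no level-1 rule fires."""
    for rule_set in (level1, level2):
        for r in sorted(rule_set, key=lambda r: (-r.confidence, -r.support)):
            if r.matches(record):
                return r.label
    return default_label
```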
