Abstract

Decision tree classification is a simple and widely used mining function. Decision tree algorithms are computationally intensive, yet they do not capture evolutionary trends in an incrementally growing data repository. In conventional mining approaches, if two or more datasets are to be merged into a single target dataset, the entire computation for constructing a classifier has to be carried out all over again. Previous work in this field has constructed individual decision tree classifiers and merged them either by voted arbitration or by merging the corresponding decision rules. We take a new approach: we pre-process the individual windows of the growing database into summaries that we call Knowledge Concentrates (KCs). The KCs are formed offline. During mining, we use the KCs instead of the entire past data, thereby reducing the space and time complexity of the whole mining process. The user dynamically selects the target dataset by identifying the windows of interest; the mining requirement is satisfied by merging the respective KCs and running the decision tree algorithm on the merged KC. The proposed scheme operates in three phases. The first, planning phase gathers the dataset domain information and defines the data mining goals. The second phase makes a single scan over a window of the database and generates a summary of that window as a knowledge concentrate (KC); we use an efficient trie structure to store the KCs. The third phase merges the desired windows (KCs) and applies the classification algorithm to the aggregate of the KCs to produce the final required classifier. The salient issue addressed in this work is forming a condensed representation of the database that enables extraction of the patterns which are input to a decision-making algorithm to build the required decision tree.
The entire scheme is decision-tree-algorithm independent, in the sense that the user has the flexibility to use any standard decision tree algorithm.
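The window-summary idea sketched in the abstract can be illustrated as follows. This is a minimal, illustrative sketch only (the paper's actual KC structure is not specified here): each window of records is summarized as a trie keyed by attribute values in a fixed order, with class counts at the end of each path, and tries from different windows merge by summing counts, so any decision tree algorithm can later be run on the aggregated counts rather than on the raw data. All class and function names are hypothetical.

```python
class KCNode:
    """One node of a (hypothetical) Knowledge Concentrate trie."""
    def __init__(self):
        self.children = {}        # attribute value -> KCNode
        self.class_counts = {}    # class label -> count

def build_kc(records, labels):
    """Single scan over one window: summarize (record, label) pairs as a trie."""
    root = KCNode()
    for record, label in zip(records, labels):
        node = root
        for value in record:      # attributes visited in a fixed order
            node = node.children.setdefault(value, KCNode())
        node.class_counts[label] = node.class_counts.get(label, 0) + 1
    return root

def merge_kc(a, b):
    """Merge KC b into KC a by summing counts at matching paths; returns a."""
    for label, count in b.class_counts.items():
        a.class_counts[label] = a.class_counts.get(label, 0) + count
    for value, child in b.children.items():
        merge_kc(a.children.setdefault(value, KCNode()), child)
    return a
```

In this sketch, satisfying a mining request amounts to calling `merge_kc` over the KCs of the user-selected windows and then handing the aggregated path/class counts to any standard decision tree algorithm.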
