Abstract

Feature subset selection is an effective way to reduce dimensionality, remove irrelevant data, and improve predictive accuracy. It can be viewed as the process of identifying and removing as many irrelevant and redundant features as possible: irrelevant features do not contribute to predictive accuracy, and redundant features do not help build a better predictor because they provide mostly information that is already present in other features. Together, irrelevant and redundant features severely degrade the accuracy of learning machines. This paper focuses on feature selection for classification. An algorithm is used that ranks attributes according to their significance; the ranked attributes are then fed as input to a simple decision-tree construction algorithm (Oblivious Tree). Results show that the decision tree built from the features chosen by the proposed algorithm outperforms a decision tree built without feature selection. The experimental results also indicate that the procedure produces a smaller tree with acceptable accuracy. On the selected datasets, the decision tree method achieved 85.87% accuracy when compared with other techniques.
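As a rough illustration of the pipeline the abstract describes, the sketch below ranks features by a significance score and then trains a decision tree on only the top-ranked subset, comparing it against a tree trained on all features. The mutual-information ranking, the standard CART tree (used here in place of the Oblivious Tree), the example dataset, and the choice of k are all assumptions for illustration, not details taken from the paper.

```python
# A minimal sketch of the described pipeline, NOT the paper's exact algorithm:
# rank features by mutual information (a stand-in for the paper's unspecified
# significance measure) and train a standard CART decision tree (in place of
# the Oblivious Tree) on the top-ranked subset.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # illustrative dataset choice

# Rank attributes by significance (here: mutual information with the class label).
scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]  # most significant feature first

# Keep the top-k ranked features; k is an illustrative choice, not from the paper.
k = 10
X_selected = X[:, ranking[:k]]

# Compare trees built with and without feature selection.
tree = DecisionTreeClassifier(random_state=0)
acc_selected = cross_val_score(tree, X_selected, y, cv=5).mean()
acc_full = cross_val_score(tree, X, y, cv=5).mean()
print(f"accuracy with top-{k} ranked features: {acc_selected:.4f}")
print(f"accuracy with all features:            {acc_full:.4f}")
```

Because the selected subset drops redundant and irrelevant attributes, the resulting tree is typically smaller, which mirrors the paper's observation that feature selection yields a more compact tree with comparable accuracy.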
