Abstract

In recent years, a number of works have reported the use of combinations of multiple classifiers to produce a single classification and have demonstrated significant performance improvements. The resulting classifier, referred to as an ensemble classifier, is a set of classifiers whose individual decisions are combined by weighted or unweighted voting to classify new examples. An ensemble is often more accurate than the individual classifiers that make it up. In remote sensing, Giacinto and Roli (1997) and Roli et al. (1997) report the use of ensembles of neural networks and the integration of classification results from different types of classifiers. Studies that grow an ensemble of decision trees and allow them to vote for the most popular class have reported a significant improvement in classification accuracy for land cover classification. This paper presents results obtained with the random forests classifier, another technique for generating an ensemble of classifiers, and compares its performance with that of decision tree ensembles. A classification accuracy of 88.32% is achieved by the random forest classifier, compared with 87.38% and 87.28% for decision tree ensembles created using boosting and bagging, respectively. The study further suggests that bagging performs better than boosting when the training data contain noise.
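The voting scheme described above can be illustrated with a minimal sketch. The snippet below is not taken from the paper; it simply shows unweighted majority voting over the outputs of several base classifiers, with toy stand-in classifiers (the function names and labels are illustrative assumptions):

```python
from collections import Counter

def majority_vote(predictions):
    # Combine per-classifier predictions for one example by
    # unweighted voting: the most popular class label wins.
    return Counter(predictions).most_common(1)[0][0]

def ensemble_classify(classifiers, x):
    # Each base classifier maps an example x to a class label;
    # the ensemble returns the majority vote of their decisions.
    return majority_vote([clf(x) for clf in classifiers])

# Three toy base classifiers that disagree on an example
# (labels "forest"/"water" are purely illustrative):
clfs = [lambda x: "forest", lambda x: "forest", lambda x: "water"]
print(ensemble_classify(clfs, None))  # prints "forest"
```

Weighted voting would scale each classifier's vote by a per-classifier weight before tallying; random forests use the unweighted form over trees grown on bootstrap samples with random feature selection.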
