Abstract

Feature-based ensemble learning, in which weak hypotheses are learned within feature subspaces constructed by repeated random feature selection, is described. The proposed ensemble approach is less affected by noisy features or outliers unique to the training set than the bagging and boosting algorithms, because feature subsets are selected at random while every weak hypothesis is trained on the entire training set. The individual weak hypotheses carry out their own generalization processes within their associated feature subspaces, independently of one another. This allows the proposed ensemble to generalize better to unseen data than ensemble learning methods that randomly choose subsets of training samples in the input space. The weak hypotheses are combined through three different aggregation strategies: majority voting, weighted averaging, and neural network-based aggregation. The proposed ensemble technique has been applied to hyperspectral chemical plume data, and a performance comparison between the proposed and existing ensemble methods is presented.
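As a concrete illustration of the subspace construction and majority-vote aggregation described above, the following is a minimal Python sketch. It assumes shallow scikit-learn decision trees as the weak hypotheses and a fixed subspace size; the function names and the n_learners and n_sub parameters are illustrative choices, not details taken from the paper.

    # Minimal sketch of a feature-subspace ensemble (assumptions noted above).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def fit_subspace_ensemble(X, y, n_learners=25, n_sub=10, seed=0):
        """Train one weak hypothesis per randomly selected feature subspace.

        Every learner sees the entire training set (all rows of X);
        only the feature columns differ between learners.
        Requires n_sub <= X.shape[1].
        """
        rng = np.random.default_rng(seed)
        ensemble = []
        for _ in range(n_learners):
            # Repeated random feature selection: draw a subspace
            # without replacement from the full feature set.
            subspace = rng.choice(X.shape[1], size=n_sub, replace=False)
            clf = DecisionTreeClassifier(max_depth=3).fit(X[:, subspace], y)
            ensemble.append((subspace, clf))
        return ensemble

    def predict_majority_vote(ensemble, X):
        """Aggregate the weak hypotheses by unweighted majority voting.

        Assumes integer class labels 0..K-1 so np.bincount applies.
        """
        votes = np.stack([clf.predict(X[:, sub]) for sub, clf in ensemble])
        # Most frequent predicted label per test sample (per column).
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

    # Example usage with synthetic data:
    # X = np.random.rand(200, 64); y = (X[:, 0] > 0.5).astype(int)
    # ens = fit_subspace_ensemble(X, y)
    # y_hat = predict_majority_vote(ens, X)

The other two aggregation strategies named in the abstract would replace the vote: weighted averaging combines the learners' outputs using per-learner weights, and the neural-network aggregator learns the combination from the weak hypotheses' outputs.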
