Abstract

Feature selection is the process of choosing the most important features from a set of attributes and discarding the less important or redundant ones, in order to improve classification accuracy and lower future computation and data-collection costs. A variety of feature selection procedures have been described in the literature to narrow down the features that need to be analyzed. This study uses six alternative feature selection methods: Chi-Square (CS), Information Gain (IG), Relief, Gain Ratio (GR), Symmetrical Uncertainty (SU), and Mutual Information (MI). The rankings produced by these six methods on the given dataset are combined using four rank aggregation strategies: rank aggregation, the Borda Count (BC) method, score-and-rank combination, and unified feature scoring (UFS). Since these four procedures by themselves were unable to produce a clear selection rank for the features, an ensemble of the aggregated ranks is constructed; for this, the bagging method of majority voting was applied.
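To make the aggregation step concrete, the sketch below illustrates a Borda Count combination of per-method feature rankings. It is a minimal example, not the paper's pipeline: scikit-learn's chi2 and mutual_info_classif stand in for two of the six scorers, and a toy dataset replaces the study's data.

```python
# Minimal sketch: aggregate per-method feature rankings with a Borda count.
# Assumptions (not from the paper): scikit-learn's chi2 and
# mutual_info_classif stand in for two of the six scorers, and the
# breast-cancer toy dataset replaces the study's data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import chi2, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

# Each scorer returns one relevance score per feature (higher = better).
scores_per_method = [
    chi2(X, y)[0],                               # Chi-Square statistic
    mutual_info_classif(X, y, random_state=0),   # Mutual Information estimate
]

# Borda count: the feature ranked in position p by a method receives
# (n_features - p) points; points are summed across methods.
borda_points = np.zeros(n_features)
for scores in scores_per_method:
    order = np.argsort(scores)[::-1]             # best-scoring feature first
    for position, feature_idx in enumerate(order):
        borda_points[feature_idx] += n_features - position

# Consensus ranking: features sorted by total Borda points.
consensus_order = np.argsort(borda_points)[::-1]
print("Consensus top-5 feature indices:", consensus_order[:5])
```

The same pattern extends to all six scorers by appending their score vectors to scores_per_method; the other aggregation strategies differ only in how the per-method scores or ranks are combined into the final ordering.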
