Abstract

A new feature selection method based on Inductive probability is proposed in this paper. The main idea is to identify dependent attributes and remove the redundant ones among them. The dependency information required is obtained through an Inductive probability approach. The purpose of the proposed method is to reduce computational complexity and increase the classification accuracy of the selected feature subsets. The dependence between two attributes is determined from the probabilities of their joint values contributing to positive and negative classification decisions. If an opposing set of attribute values does not lead to opposing classification decisions (zero probability), the two attributes are considered independent; otherwise they are dependent, and one of them can be removed, thereby reducing the number of attributes. A new attribute selection algorithm based on Inductive probability is implemented and evaluated through extensive experiments, comparing it with related attribute selection algorithms over eight datasets from the UCI Machine Learning Repository: Molecular Biology, Connect4, Soybean, Zoo, Balloon, Mushroom, Lenses, and Fictional.

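As a concrete illustration, the following is a minimal sketch (in Python) of one plausible reading of the pairwise dependence test described above. The function name, the data layout, and the exact zero-probability criterion are assumptions made for illustration; they are not the authors' implementation.

```python
from collections import defaultdict
from itertools import combinations

def dependent_pairs(records, attributes, label_key="class", positive="+"):
    """Sketch of a pairwise dependence test on categorical data.

    records    : list of dicts mapping attribute name -> value
    attributes : attribute names to test pairwise
    label_key  : key holding the binary classification decision
    positive   : value treated as the positive decision

    A pair is flagged as dependent when some joint value of the two
    attributes occurs with both positive and negative decisions, i.e.
    the estimated probability of opposing decisions for that joint
    value is non-zero; otherwise the pair is treated as independent.
    """
    pairs = []
    for a, b in combinations(attributes, 2):
        counts = defaultdict(lambda: [0, 0])  # joint value -> [pos count, neg count]
        for rec in records:
            key = (rec[a], rec[b])
            counts[key][0 if rec[label_key] == positive else 1] += 1
        if any(pos > 0 and neg > 0 for pos, neg in counts.values()):
            pairs.append((a, b))
    return pairs

# Toy usage: the joint value (sunny, high) appears with both decisions,
# so the pair (sky, humidity) would be reported as dependent and one of
# the two attributes could be considered for removal.
data = [
    {"sky": "sunny", "humidity": "high", "class": "+"},
    {"sky": "sunny", "humidity": "high", "class": "-"},
    {"sky": "rainy", "humidity": "low", "class": "-"},
]
print(dependent_pairs(data, ["sky", "humidity"]))
```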