Abstract

Numerous studies have focused on feature selection using a variety of algorithms, but most of these algorithms encounter problems when the amount of data is large. In this paper, we propose an algorithm that handles large datasets by partitioning the data, computing a reduct on each partition, and then selecting the intersection of all reducts as a stable reduct. The algorithm is effective but may suffer from loss of information if the partitions are not representative samples of the data. The proposed algorithm is based on the discernibility matrix and discernibility function, and it can handle data containing a significant amount of information. Our results show that the proposed algorithm is powerful and flexible enough to target a range of different domains, and that it effectively reduces computational complexity while increasing reduction efficiency. The efficiency of the proposed algorithm is further illustrated by experiments on UCI datasets.
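To make the partition-and-intersect idea concrete, the following is a minimal Python sketch of the scheme the abstract describes. It is not the authors' implementation: the greedy hitting-set heuristic over discernibility sets, the data layout (rows as attribute-value dictionaries paired with decision labels), and all function names (`discernibility_sets`, `greedy_reduct`, `stable_reduct`) are assumptions made for illustration.

```python
from itertools import combinations

def discernibility_sets(rows, decisions, attrs):
    """For each pair of objects with different decision values, collect
    the set of condition attributes on which the two objects differ.
    These sets are the non-empty entries of the discernibility matrix."""
    sets = []
    for i, j in combinations(range(len(rows)), 2):
        if decisions[i] != decisions[j]:
            diff = {a for a in attrs if rows[i][a] != rows[j][a]}
            if diff:
                sets.append(diff)
    return sets

def greedy_reduct(rows, decisions, attrs):
    """Approximate a reduct as a hitting set of the discernibility sets:
    repeatedly add the attribute that resolves the most remaining pairs.
    (A heuristic stand-in for evaluating the discernibility function.)"""
    remaining = discernibility_sets(rows, decisions, attrs)
    reduct = set()
    while remaining:
        best = max(attrs, key=lambda a: sum(a in s for s in remaining))
        reduct.add(best)
        remaining = [s for s in remaining if best not in s]
    return reduct

def stable_reduct(partitions, attrs):
    """Compute a reduct on each partition and intersect them, per the
    partition-and-intersect idea in the abstract. If the partitions are
    unrepresentative the intersection can lose information (or even be
    empty), which is the caveat the abstract notes."""
    result = set(attrs)
    for rows, decisions in partitions:
        result &= greedy_reduct(rows, decisions, attrs)
    return result

# Hypothetical usage on two tiny partitions of a decision table.
attrs = ["a", "b", "c"]
part1 = ([{"a": 1, "b": 0, "c": 1}, {"a": 0, "b": 0, "c": 1}], [0, 1])
part2 = ([{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 1, "c": 0}], [0, 1])
print(stable_reduct([part1, part2], attrs))  # {'a'}
```

Because each partition is much smaller than the full table, the quadratic pairwise pass runs over far fewer object pairs per partition, which is the source of the computational savings the abstract claims.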
