Abstract

As a supervised learning method, classical rough set theory often requires a large amount of labeled data, in which concept approximation and attribute reduction are two key issues. With the advent of the big data era, however, labeling data has become an expensive, laborious, and sometimes infeasible task, while unlabeled data are cheap and easy to collect. Hence, techniques for rough data analysis of big data under a semi-supervised setting, with limited labeled data, are desirable. Although many concept approximation and attribute reduction algorithms have been proposed in classical rough set theory, these methods often perform poorly in the context of limited labeled big data. The challenges to classical rough set theory can be summarized in three issues: the limited availability of labels in big data, computational inefficiency, and over-fitting in attribute reduction. To address these three challenges, we introduce a theoretical framework called the local rough set and develop a series of corresponding concept approximation and attribute reduction algorithms with linear time complexity, which work efficiently and effectively on limited labeled big data. Theoretical analysis and experimental results show that each algorithm in the local rough set significantly outperforms its original counterpart in classical rough set theory. It is worth noting that the advantages of the local rough set algorithms become more pronounced on larger data sets.
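For readers unfamiliar with the terminology, the following minimal Python sketch illustrates what concept approximation means in the classical (not the proposed local) rough set model: a target concept is bracketed by a lower and an upper approximation built from the equivalence classes induced by a set of condition attributes. The toy decision table, attribute names, and helper functions are hypothetical illustrations and are not taken from the paper.

```python
from collections import defaultdict

def partition(objects, attrs):
    """Group objects into equivalence classes by their values on `attrs`."""
    blocks = defaultdict(list)
    for name, values in objects.items():
        key = tuple(values[a] for a in attrs)
        blocks[key].append(name)
    return list(blocks.values())

def approximations(objects, attrs, target):
    """Classical lower/upper approximations of `target` under the
    indiscernibility relation induced by `attrs`."""
    lower, upper = set(), set()
    for block in partition(objects, attrs):
        block_set = set(block)
        if block_set <= target:      # class entirely inside the concept
            lower |= block_set
        if block_set & target:       # class overlaps the concept
            upper |= block_set
    return lower, upper

# Hypothetical decision table: four objects, two condition attributes.
objects = {
    "x1": {"a1": 0, "a2": 1},
    "x2": {"a1": 0, "a2": 1},
    "x3": {"a1": 1, "a2": 0},
    "x4": {"a1": 1, "a2": 1},
}
target = {"x1", "x3"}                # concept to approximate
lower, upper = approximations(objects, ["a1", "a2"], target)
print(lower, upper)                  # {'x3'} and {'x1', 'x2', 'x3'}
```

The paper's local rough set framework modifies how such approximations are computed so that only limited labeled data are needed and the algorithms run in linear time; the sketch above shows only the classical baseline that those methods improve upon.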
