Abstract

In machine learning and pattern recognition, building a good predictive model from big or massive data is a key problem, especially when the data contains noisy and unrepresentative samples. Such samples adversely affect the learning model and may degrade its performance. To alleviate this problem, it is often necessary to sample the data by eliminating unnecessary instances while keeping the underlying distribution intact. This process is called sampling or instance selection (IS); however, it involves a substantial computational cost. This paper presents an uncertainty-based optimal sample selection (UBOSS) method that efficiently selects a subset of optimal samples. The proposed method comprises three main steps: first, an IS method is used to identify the patterns of representative and unrepresentative samples in the original data set; then, an uncertainty-based selector obtains the fuzziness (a type of uncertainty) of those samples using a classifier whose output is a membership (fuzzy) vector; finally, a divide-and-conquer strategy is applied to obtain a subset of representative samples. Experiments are conducted on six datasets to evaluate the performance of the proposed method. The results show that the proposed methodology outperforms the baseline methods (CNN, IB3, and DROP3) in terms of selection performance (i.e., the optimality of the selected samples).
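To make the uncertainty-based selection step concrete, the following minimal Python sketch computes the fuzziness of classifier membership vectors and keeps the least fuzzy samples as the representative subset. This is an illustration only, not the authors' UBOSS implementation: the choice of classifier, the specific fuzziness formula, and the `keep_ratio` parameter are assumptions introduced here for demonstration, and the paper's divide-and-conquer grouping over fuzziness values is not reproduced.

```python
# Illustrative sketch of fuzziness-based sample selection (NOT the exact UBOSS method).
# Assumptions: any probabilistic classifier stands in for the paper's classifier,
# fuzziness is a standard entropy-style measure on membership vectors, and
# "representative" is approximated by the lowest-fuzziness fraction of samples.

import numpy as np
from sklearn.linear_model import LogisticRegression


def fuzziness(memberships, eps=1e-12):
    """Fuzziness of each membership (probability) vector mu:
    F(mu) = -(1/c) * sum_j [ mu_j*log(mu_j) + (1-mu_j)*log(1-mu_j) ]."""
    mu = np.clip(memberships, eps, 1.0 - eps)
    return -np.mean(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu), axis=1)


def select_representative(X, y, keep_ratio=0.5):
    """Train a classifier, score each sample's fuzziness, and keep the
    keep_ratio fraction with the lowest fuzziness (least uncertain)."""
    clf = LogisticRegression(max_iter=1000).fit(X, y)  # any model with predict_proba
    fuzz = fuzziness(clf.predict_proba(X))             # one fuzziness value per sample
    order = np.argsort(fuzz)                           # low fuzziness first
    keep = order[: int(len(X) * keep_ratio)]
    return X[keep], y[keep]
```

A caller would pass the original feature matrix and labels, e.g. `X_sub, y_sub = select_representative(X, y, keep_ratio=0.3)`, and then train the final model on the reduced subset; how many samples to keep, and whether low- or mid-fuzziness samples are treated as representative, would follow the paper's own divide-and-conquer criterion.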