To address the challenging task of classifying high-dimensional data with limited labeled samples, we propose two semi-supervised learning models: the random subspace classifier ensemble model (SSRS) and its adaptive version (ASSRS). Considering the characteristics of high-dimensional data, SSRS selects subspaces along both the sample and feature dimensions and then reduces the dimensionality of each subspace. To further improve performance, ASSRS adds a sample-labeling auxiliary algorithm, an adaptive sample subspace algorithm, and an adaptive weighted voting rule, which respectively increase the proportion of labeled samples, select a suitable sample subspace for each feature subspace, and assign a relatively optimal weight to each base classifier. Experiments show that SSRS and ASSRS outperform competing algorithms and that ASSRS outperforms SSRS. Moreover, both models can accurately label samples in datasets where the proportion of labeled samples is low. Because analysts increasingly face large numbers of high-dimensional datasets with few labels, making accurate predictions from a limited proportion of labeled data is important.
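To make the random subspace ensemble idea concrete, the following is a minimal sketch of a subspace ensemble for classification with few labels. The specific choices here (PCA for per-subspace dimension reduction, logistic regression as the base classifier, a simple weighted vote, and the helper names `train_subspace_ensemble` and `predict_vote`) are illustrative assumptions, not the paper's exact SSRS or ASSRS algorithms.

```python
# Sketch of a random-subspace classifier ensemble for semi-supervised-style use:
# train base classifiers on random feature subspaces of the labeled data,
# reduce each subspace's dimension, then label remaining samples by vote.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def train_subspace_ensemble(X_lab, y_lab, n_members=10, feat_frac=0.5, n_components=5):
    """Train one (feature subset, PCA, base classifier) triple per ensemble member."""
    n_features = X_lab.shape[1]
    members = []
    for _ in range(n_members):
        # Random feature subspace (the paper also subsamples the sample dimension).
        feats = rng.choice(n_features, size=max(1, int(feat_frac * n_features)), replace=False)
        # Reduce the dimension of the subspace before training (PCA is an assumption).
        pca = PCA(n_components=min(n_components, len(feats), X_lab.shape[0]))
        Z = pca.fit_transform(X_lab[:, feats])
        clf = LogisticRegression(max_iter=1000).fit(Z, y_lab)
        members.append((feats, pca, clf))
    return members

def predict_vote(members, X, weights=None):
    """Combine base classifiers by (optionally weighted) majority vote."""
    votes = np.stack([clf.predict(pca.transform(X[:, feats]))
                      for feats, pca, clf in members])          # (n_members, n_samples)
    if weights is None:
        weights = np.ones(len(members))                         # uniform weights (SSRS-like)
    classes = np.unique(votes)
    scores = np.array([[weights[votes[:, j] == c].sum() for c in classes]
                       for j in range(X.shape[0])])
    return classes[np.argmax(scores, axis=1)]

# Toy usage: 200 high-dimensional samples, only 20 of them labeled.
X = rng.normal(size=(200, 100))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
labeled = rng.choice(200, size=20, replace=False)
ensemble = train_subspace_ensemble(X[labeled], y[labeled])
pseudo_labels = predict_vote(ensemble, X)   # assign labels to the whole pool by vote
print("accuracy on the labeled subset:", (pseudo_labels[labeled] == y[labeled]).mean())
```

In this sketch the vote weights are uniform; the adaptive variant described in the abstract would instead learn a weight per base classifier and adapt the sample subspace used for each feature subspace.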