Abstract

Traditional supervised learning classifiers need a large number of labeled samples to achieve good performance; however, many biological datasets contain only a small number of labeled samples, while the remaining samples are unlabeled. Labeling these unlabeled samples manually is difficult or expensive. Techniques such as active learning and semi-supervised learning have been proposed to exploit the unlabeled samples and improve model performance. However, in active learning the model tends to be short-sighted or biased, and some manual labeling effort is still required, while semi-supervised learning methods are easily affected by noisy samples. In this paper, we propose a novel logistic regression model based on the complementarity of active learning and semi-supervised learning, which exploits the unlabeled samples at minimal cost to improve disease classification accuracy. In addition, a pseudo-labeled sample update mechanism is designed to reduce the number of false pseudo-labeled samples. The experimental results show that the new model achieves better performance than widely used semi-supervised learning and active learning methods in disease classification and gene selection.

Highlights

  • Identifying disease-related genes and classifying disease types from gene expression data is an active research topic in machine learning

  • The experiments show our method achieves better accuracy than the active learning (AL) and semi-supervised learning (SSL) logistic regression models

  • The novel logistic regression model is designed based on the complementarity of semi-supervised learning and active learning


Introduction

Identifying disease-related genes and classifying disease types using gene expression data is an active research topic in machine learning. Many different models, such as logistic regression [1] and support vector machines (SVM) [2], have been applied in this area. AL aims to train an accurate prediction model at minimal cost of manually labeling unlabeled samples: it selects the most uncertain or informative unlabeled samples and has them annotated by human experts. These newly labeled samples are then added to the training dataset to improve the model's performance. Although AL reduces the manual workload, labeling the samples it selects through biological experiments is still costly. In contrast, SSL uses unlabeled data together with labeled data in the training process without any manual labeling. A recent study by Lin [15] designed a new active self-paced learning mechanism that combines AL and SSL for face recognition.
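To make the two ingredients concrete, the following Python sketch illustrates one round of uncertainty-based AL querying combined with confidence-based pseudo-labeling on top of a logistic regression classifier. This is a minimal illustration under our own assumptions, not the authors' exact model; the function name al_ssl_round and the parameters query_budget and confidence_threshold are hypothetical.

# Minimal sketch (assumed names, not the paper's implementation):
# uncertainty-based AL query + confidence-based SSL pseudo-labeling
# around a scikit-learn logistic regression classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def al_ssl_round(X_labeled, y_labeled, X_unlabeled,
                 query_budget=5, confidence_threshold=0.95):
    """One round: query the most uncertain samples for manual labeling (AL)
    and pseudo-label the most confident ones (SSL)."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_labeled, y_labeled)

    proba = clf.predict_proba(X_unlabeled)
    confidence = proba.max(axis=1)

    # AL step: indices of the least confident (most informative) unlabeled
    # samples, to be sent to a human expert for annotation.
    query_idx = np.argsort(confidence)[:query_budget]

    # SSL step: indices of highly confident samples, to be added to the
    # training set with their predicted (pseudo) labels.
    pseudo_idx = np.where(confidence >= confidence_threshold)[0]
    pseudo_labels = proba[pseudo_idx].argmax(axis=1)

    return clf, query_idx, pseudo_idx, pseudo_labels

In such a loop, the expert-labeled queries and the confident pseudo-labeled samples are both appended to the labeled set before the next round; a pseudo-label update step, as described in the abstract, would additionally re-check previously pseudo-labeled samples and drop those the current model no longer supports.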
