Abstract

To provide more external knowledge for training semi-supervised learning (SSL) algorithms, this paper proposes a maximum mean discrepancy-based SSL (MMD-SSL) algorithm, which trains a well-performing classifier by iteratively refining it with highly confident unlabeled samples. The MMD-SSL algorithm performs three main steps. First, a multilayer perceptron (MLP) is trained on the labeled samples and then used to assign labels to the unlabeled samples. Second, the unlabeled samples are divided into multiple groups with the k-means clustering algorithm. Third, the maximum mean discrepancy (MMD) criterion is used to measure the distribution consistency between the k-means-clustered samples and the MLP-classified samples. Samples with a consistent distribution are labeled as highly confident and used to retrain the MLP. The MMD-SSL algorithm repeats this training procedure until all unlabeled samples are consistently labeled. We conducted extensive experiments on 29 benchmark data sets to validate the rationality and effectiveness of the MMD-SSL algorithm. Experimental results show that the generalization capability of the MLP gradually improves as the number of labeled samples increases, and a statistical analysis demonstrates that the MMD-SSL algorithm achieves better testing accuracy and kappa values than 10 other self-training and co-training SSL algorithms.
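The distribution-consistency test in the third step relies on the kernel two-sample MMD statistic. For reference, a standard biased empirical estimate of the squared MMD between sample sets X = {x_1, ..., x_m} and Y = {y_1, ..., y_n} under a kernel k is

MMD^2(X, Y) = \frac{1}{m^2}\sum_{i,i'=1}^{m} k(x_i, x_{i'}) + \frac{1}{n^2}\sum_{j,j'=1}^{n} k(y_j, y_{j'}) - \frac{2}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} k(x_i, y_j),

and two sample groups are treated as distributionally consistent when this value is small (for example, below a threshold). This is the standard estimator from the kernel two-sample test literature; the exact estimator, kernel, and acceptance rule used in the paper are not reproduced here.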

Highlights

  • Semi-supervised learning (SSL) is an important branch of data mining and machine learning [1], which uses a large number of unlabeled samples to improve the generalization capability of classifiers trained on a small number of labeled samples

  • The maximum mean discrepancy-based semi-supervised learning (MMD-SSL) algorithm was implemented in the Python programming language, and the compared SSL algorithms were downloaded from the SelfLabeled repository (https://sci2s.ugr.es/SelfLabeled) maintained by the Soft Computing and Intelligent Information Systems (SCI2S) research group at the University of Granada

  • The first experiment was conducted to evaluate the suitability of using the multilayer perceptron (MLP) classification algorithm and the k-means clustering algorithm in the maximum mean discrepancy-based SSL (MMD-SSL) algorithm


Summary

Introduction

Semi-supervised learning (SSL) is an important branch of data mining and machine learning [1], which uses a large number of unlabeled samples to improve the generalization capability of classifiers trained on a small number of labeled samples. One extended co-training SSL algorithm, named Tri-Training, generates three classifiers from the original labeled samples and refines them with unlabeled samples during the tri-training process. For co-training SSL, however, the assumption that multiple views are conditionally independent typically results in high computational complexity. To address these issues, this paper presents a novel SSL algorithm, named maximum mean discrepancy-based semi-supervised learning (MMD-SSL), that performs three main steps.
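As a concrete illustration of these three steps, the sketch below mirrors the described loop using scikit-learn's MLPClassifier and KMeans together with an RBF-kernel MMD estimate. It is a minimal sketch under assumed choices (the helper mmd_rbf, the consistency threshold, the fixed kernel bandwidth, and the stopping rule are illustrative), not the authors' implementation.

```python
# Illustrative sketch of the MMD-SSL training loop (assumptions noted above; not the authors' code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.neural_network import MLPClassifier


def mmd_rbf(X, Y, gamma=1.0):
    """Biased empirical squared MMD between sample sets X and Y with an RBF kernel."""
    return (rbf_kernel(X, X, gamma=gamma).mean()
            + rbf_kernel(Y, Y, gamma=gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma=gamma).mean())


def mmd_ssl(X_lab, y_lab, X_unlab, n_classes, threshold=0.05, max_rounds=10):
    """Iteratively promote unlabeled samples whose k-means cluster and MLP-predicted
    class have consistent (low-MMD) distributions.  Class labels are assumed to be
    the integers 0 .. n_classes - 1."""
    X_lab, y_lab, X_unlab = np.asarray(X_lab), np.asarray(y_lab), np.asarray(X_unlab)
    mlp = MLPClassifier(max_iter=500)
    for _ in range(max_rounds):
        if len(X_unlab) == 0:
            break
        # Step 1: train the MLP on the current labeled pool and pseudo-label the rest.
        mlp.fit(X_lab, y_lab)
        pseudo = mlp.predict(X_unlab)
        # Step 2: partition the unlabeled samples with k-means.
        clusters = KMeans(n_clusters=n_classes, n_init=10).fit_predict(X_unlab)
        # Step 3: accept samples whose cluster agrees in distribution with their predicted class.
        confident = np.zeros(len(X_unlab), dtype=bool)
        for c in range(n_classes):
            class_mask = pseudo == c
            if class_mask.sum() < 2:
                continue
            for k in range(n_classes):
                cluster_mask = clusters == k
                if cluster_mask.sum() < 2:
                    continue
                if mmd_rbf(X_unlab[class_mask], X_unlab[cluster_mask]) < threshold:
                    confident |= class_mask & cluster_mask
        if not confident.any():
            break
        # Move the highly confident samples (with their pseudo-labels) into the labeled pool.
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo[confident]])
        X_unlab = X_unlab[~confident]
    # Final retraining on the enlarged labeled pool.
    return mlp.fit(X_lab, y_lab)
```

In practice the kernel bandwidth and the acceptance threshold would need tuning per data set, and the paper's actual consistency criterion and stopping condition may differ from this sketch.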

Preliminaries
SSL with Self-Training Paradigm
SSL with Co-Training Paradigm
The Proposed MMD-SSL Algorithm
Experimental Results and Analysis
Rationality Validation
Effectiveness Validation
Conclusions and Future Work