Abstract

The minimum squared error based classification (MSEC) method establishes a single classification model for all test samples. However, this model may not be optimal for every test sample. This paper proposes an improved MSEC (IMSEC) method that is tailored to each test sample. The proposed method first roughly identifies the possible classes of the test sample, and then establishes a minimum squared error (MSE) model based on the training samples from these possible classes. We apply our method to face recognition. Experimental results on several datasets show that IMSEC outperforms MSEC and other state-of-the-art methods in terms of accuracy.

Highlights

  • The minimum squared error based classification (MSEC) is theoretically sound and achieves high accuracy [1,2]

  • The MSEC has been applied to a number of problems such as imbalanced classification [7], palm-print verification [9], low-rank representation [10,11], super-resolution learning [12], image restoration [13], and manifold learning [14]

  • The main difference between representation based classification (RC) and MSEC is that RC tries to use the weighted sum of all the training samples to represent the test sample, whereas MSEC aims to map the training samples to their class labels
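The contrast drawn in the last highlight can be made concrete. Below is a minimal NumPy sketch (not code from the paper) on a tiny toy gallery: a representation-based classifier expresses the test sample as a weighted sum of all training samples and compares class-wise residuals, while MSEC fits a linear map from training samples to one-hot class labels. The gallery matrix `A`, the ridge parameter `lam`, and the residual-based decision rule are illustrative assumptions.

```python
import numpy as np

# Toy gallery: 4 training samples (rows) in 3-d feature space, two classes.
A = np.array([[1.0, 0.0, 0.0],
              [0.9, 0.1, 0.0],
              [0.0, 1.0, 0.2],
              [0.1, 0.9, 0.1]])
labels = np.array([0, 0, 1, 1])
x = np.array([0.95, 0.05, 0.0])       # test sample close to class 0
lam = 1e-2                            # illustrative ridge regularizer

# Representation-based view (CRC-style): represent x as a weighted sum of
# ALL training samples, w = argmin ||x - A^T w||^2 + lam ||w||^2, then
# classify by the smallest class-wise reconstruction residual.
w = np.linalg.solve(A @ A.T + lam * np.eye(len(labels)), A @ x)
residuals = [np.linalg.norm(x - A[labels == c].T @ w[labels == c])
             for c in (0, 1)]
rc_class = int(np.argmin(residuals))

# MSEC view: map training samples to their one-hot labels,
# W = argmin ||A W - Y||^2 + lam ||W||^2, then classify x by the
# largest predicted label score.
Y = np.eye(2)[labels]
W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
msec_class = int(np.argmax(x @ W))
```

On this toy gallery both views agree: the test sample lies near the class-0 training samples, so the class-0 residual is smallest and the class-0 label score is largest.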


Summary

Introduction

The minimum squared error based classification (MSEC) is theoretically sound and achieves high accuracy [1,2]. MSEC minimizes the squared error between the predicted and true class labels of the training samples. Since the test sample and the training samples that are ‘‘close’’ to it yield similar MSE models, IMSEC can be expected to map the test sample to the correct class label better than the conventional MSE model (CMSE). Step 2 of the proposed method assigns the test sample to one of its possible classes. The proposed method establishes a model that maps the training samples to their true class labels, whereas CRC uses a weighted combination of all the training samples to represent the test sample, and LRC uses the class-specific training samples to represent it. When classifying a test sample, the proposed method and LRC need to solve one and C MSE models, respectively, where C is the number of classes, so the proposed method is more efficient than LRC.
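The two steps described above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the ridge regularizer `lam`, the bias column, and in particular the candidate-class selection by distance to class means (with `k` candidate classes) are assumptions made here for concreteness; the paper's own Step 1 may select candidate classes differently.

```python
import numpy as np

def fit_mse(X, y, n_classes, lam=1e-3):
    """Fit a ridge-regularized MSE model mapping samples X (n x d)
    to one-hot class labels, with an appended bias column."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])   # add bias feature
    Y = np.eye(n_classes)[y]                        # one-hot labels (n x C)
    # Closed-form solution W = (Xa^T Xa + lam I)^{-1} Xa^T Y
    return np.linalg.solve(Xa.T @ Xa + lam * np.eye(Xa.shape[1]), Xa.T @ Y)

def msec_predict(W, x):
    """Classify x by the largest predicted label score."""
    return int(np.argmax(np.append(x, 1.0) @ W))

def imsec_predict(X, y, x, n_classes, k=2, lam=1e-3):
    """IMSEC sketch: (1) keep the k classes whose means are nearest to x
    as candidates; (2) fit an MSE model on the candidates' training
    samples only; (3) assign x to one of the candidate classes."""
    means = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
    candidates = np.argsort(np.linalg.norm(means - x, axis=1))[:k]
    mask = np.isin(y, candidates)
    relabel = {c: i for i, c in enumerate(candidates)}  # local class ids
    y_local = np.array([relabel[c] for c in y[mask]])
    W = fit_mse(X[mask], y_local, k, lam)               # class-restricted model
    return int(candidates[msec_predict(W, x)])
```

The efficiency remark then follows directly: classifying one test sample requires solving a single (candidate-restricted) MSE model here, whereas an LRC-style scheme solves one model per class, i.e. C of them.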
