Abstract

The sparse representation based classification (SRC) method has attracted much attention in recent years due to its promising results and robustness for face recognition. Unlike previous improved versions of SRC, which emphasize sparsity itself, we focus on the decision rule of SRC. SRC predicts the label of a test sample from the class-wise residual, which measures how well the training data of each class represent the sample. This decision rule is the same as that of the nearest feature classifiers (NFCs), but it is not optimal for SRC, which relies on the mechanism of sparsity. In this paper, we first review the NFCs and rewrite them in a unified formulation. We find that the objective of NFCs differs from that of SRC, yet they share the same decision rule. To capture more discriminative information from the sparse coding coefficients, we propose a new decision rule, the sum of coefficients (SoC), which matches SRC well. SoC builds on the observation that the sparse coefficients reflect the similarities between data and can therefore take full advantage of sparsity for classification. SoC can be regarded as a voting decision rule of the kind widely used in ensemble learning, e.g., AdaBoost and Bagging. We compare our method with the original SRC on three representative face databases and show that SoC is considerably more discriminative and accurate.
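To make the contrast between the two decision rules concrete, the following is a minimal sketch, not the authors' implementation: it assumes a dictionary D whose columns are training samples with class labels, and uses scikit-learn's Lasso as a stand-in l1 solver; all function names, the alpha parameter, and the choice to sum signed coefficients per class are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative comparison of the residual-based SRC rule and a
# sum-of-coefficient (SoC) style rule, as described in the abstract.
import numpy as np
from sklearn.linear_model import Lasso  # stand-in for the l1 solver used by SRC

def sparse_code(D, y, alpha=0.01):
    """Approximately solve min ||y - D x||_2^2 + alpha ||x||_1 for the coding vector x."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(D, y)          # rows of D = feature dimensions, columns = training samples
    return lasso.coef_       # one coefficient per training sample

def src_residual_rule(D, labels, x, y):
    """Classic SRC decision: pick the class whose training samples reconstruct y best."""
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)           # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)    # class-wise reconstruction error
    return min(residuals, key=residuals.get)

def soc_rule(labels, x):
    """SoC-style decision: pick the class with the largest total coefficient mass,
    treating each sparse coefficient as a (weighted) vote for its class."""
    scores = {c: x[labels == c].sum() for c in np.unique(labels)}
    return max(scores, key=scores.get)
```

In this sketch the SoC rule reads the class prediction directly from the coding coefficients, while the residual rule goes back through the dictionary to measure reconstruction error; the paper's exact SoC definition may differ (for instance in how negative coefficients are handled).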
