Abstract

The nearest subspace classifier (NSC) assumes that the samples of each class lie on a separate subspace, so a test sample can be classified by computing its distance to each class subspace. The sparse representation based classification (SRC) generalizes the NSC: it assumes that the samples of any class can lie on a union of subspaces, and the test sample is classified by its distance to these unions of subspaces. Both NSC and SRC hinge on the assumption that the distance between the test sample and the correct subspace is small and approximately normally distributed; based on this assumption, prior studies proposed using an l2-norm distance measure. However, the l2-norm is well known to be sensitive to outliers (large deviations at a few locations). To make NSC and SRC robust and improve their performance, we propose employing an l1-norm based distance measure instead. Experiments on benchmark classification problems, face recognition, and character recognition show that the proposed method indeed improves upon the basic versions of NSC and SRC; in fact, our proposed robust NSC and robust SRC yield even better results than support vector machines and neural networks.
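The classification rule described above can be sketched in a few lines. The following is a minimal illustrative NSC, not the authors' implementation: each class subspace is estimated from a truncated SVD of that class's training samples, the test sample is projected onto each subspace, and the class with the smallest residual norm wins. Setting the norm order to 1 gives the robust l1-norm variant the abstract proposes; the function names, the toy data, and the subspace dimension are all assumptions made for this sketch.

```python
import numpy as np

def nearest_subspace_classify(train, labels, x, dim=2, norm=2):
    """Classify x by its residual distance to each class's subspace.

    norm=2 is the classical NSC residual; norm=1 is the robust
    l1-norm variant (illustrative sketch, not the paper's code)."""
    residuals = {}
    for c in np.unique(labels):
        A = train[labels == c]                       # samples of class c (rows)
        # Orthonormal basis of the class subspace via truncated SVD
        U = np.linalg.svd(A.T, full_matrices=False)[0][:, :dim]
        r = x - U @ (U.T @ x)                        # residual after projection
        residuals[c] = np.linalg.norm(r, ord=norm)   # distance to subspace
    return min(residuals, key=residuals.get)         # smallest residual wins

# Toy data: class 0 lies near the x-axis, class 1 near the y-axis.
rng = np.random.default_rng(0)
X0 = np.c_[rng.normal(0, 1, 20), rng.normal(0, 0.05, 20), rng.normal(0, 0.05, 20)]
X1 = np.c_[rng.normal(0, 0.05, 20), rng.normal(0, 1, 20), rng.normal(0, 0.05, 20)]
train = np.vstack([X0, X1])
labels = np.array([0] * 20 + [1] * 20)
print(nearest_subspace_classify(train, labels, np.array([1.0, 0.02, 0.0]), dim=1))
```

The l1 residual down-weights the influence of a few large coordinate-wise deviations (outliers), which is exactly the failure mode of the l2 residual that the abstract targets.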
