Abstract

The newly emerging sparse representation-based classifier (SRC) shows great potential for pattern classification but lacks theoretical justification. This paper gives insight into SRC and seeks reasonable support for its effectiveness. SRC uses the L1-optimizer instead of the L0-optimizer for computational convenience and efficiency. We re-examine the role of the L1-optimizer and find that, for pattern recognition tasks, it provides more meaningful information for classification than the L0-optimizer does. The L0-optimizer achieves sparsity only, whereas the L1-optimizer achieves closeness as well as sparsity. Sparsity yields a small number of nonzero representation coefficients, while closeness concentrates the nonzero coefficients on the training samples that share the class label of the given test sample. Thus, it is closeness that guarantees the effectiveness of the L1-optimizer-based SRC. Based on the closeness prior, we further propose two kinds of class L1-optimizer classifiers (CL1C): the closeness rule based CL1C (C-CL1C) and its improved version, the Lasso rule based CL1C (L-CL1C). The proposed classifiers are evaluated on five databases, and the experimental results demonstrate their advantages over SRC in classification performance and computational efficiency for large-sample-size problems.
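
To make the SRC pipeline referred to above concrete, the sketch below shows the standard SRC decision rule: a test sample is coded as a sparse linear combination of all training samples via an L1-regularized fit, and the class whose training samples best reconstruct the test sample (smallest residual) is chosen. This is a minimal illustration only, assuming a Lasso-style relaxation solved with scikit-learn; the solver choice, the `alpha` value, and the function name `src_classify` are illustrative assumptions, not the paper's C-CL1C or L-CL1C rules.

```python
# Minimal sketch of the standard SRC decision rule (illustrative only).
# Assumptions: scikit-learn's Lasso is used as the L1-optimizer surrogate,
# and alpha below is an arbitrary illustrative value.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(train_X, train_labels, test_x, alpha=0.01):
    """Classify test_x by its sparse representation over the training samples.

    train_X      : (n_features, n_samples) matrix whose columns are training samples
    train_labels : (n_samples,) array of class labels, one per column of train_X
    test_x       : (n_features,) test sample
    """
    # L1-regularized least squares (Lasso) as a relaxation of the L1-optimizer:
    # each Lasso "feature" is one training sample, so coef_ holds one
    # representation coefficient per training sample.
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(train_X, test_x)
    coef = coder.coef_  # sparse representation coefficients

    # Class-wise residuals: reconstruct test_x using only one class's
    # coefficients at a time and pick the class with the smallest residual.
    best_label, best_residual = None, np.inf
    for label in np.unique(train_labels):
        mask = (train_labels == label)
        residual = np.linalg.norm(test_x - train_X[:, mask] @ coef[mask])
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```

In this sketch, the "closeness" behavior discussed in the abstract would show up as the nonzero entries of `coef` clustering on the columns of `train_X` that belong to the test sample's true class, which is what makes the class-wise residual rule effective.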
