Abstract

Sparse representation has been successfully applied to pattern recognition problems in recent years. The most common way to produce sparse codes is l1-norm regularization. However, l1-norm regularization only favors sparsity and does not consider locality: it may select quite different bases for similar samples in order to favor sparsity, which is disadvantageous for classification. Moreover, solving the l1-minimization problem is time-consuming, which limits its use in large-scale problems. We propose an improved algorithm for sparse coding and dictionary learning that takes both sparsity and locality into account. It selects the dictionary columns that are close to the input sample for coding and imposes a locality constraint on these selected columns to obtain discriminative codes for classification. Because an analytic solution for the coding is derived using only part of the dictionary columns, the proposed algorithm is much faster than l1-based algorithms at classification time. We also derive an analytic solution for updating the dictionary during training. Experiments conducted on five face databases show that the proposed algorithm outperforms the competing algorithms in both accuracy and efficiency.
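
The abstract does not give the exact objective, but the coding step it describes (pick the dictionary atoms nearest to the input, then solve a locality-weighted least-squares problem in closed form) can be illustrated with the following minimal Python sketch. The weighting scheme, the parameter names k and lam, and the ridge-style penalty are assumptions for illustration, not the paper's definitive formulation.

import numpy as np

def locality_constrained_code(x, D, k=30, lam=1e-2):
    # Sketch: code sample x over the k dictionary atoms nearest to it,
    # with a locality-weighted quadratic penalty that admits a closed-form
    # solution. The paper's exact objective may differ.
    #   x : (d,)   input sample
    #   D : (d, K) dictionary, one atom per column
    dists = np.linalg.norm(D - x[:, None], axis=0)   # distance from x to each atom
    idx = np.argsort(dists)[:k]                      # keep the k closest atoms
    D_s = D[:, idx]                                  # selected sub-dictionary (d, k)
    w = dists[idx] ** 2                              # locality weights: distant atoms cost more

    # Analytic solution of  min_c ||x - D_s c||^2 + lam * c^T diag(w) c
    A = D_s.T @ D_s + lam * np.diag(w)
    c_local = np.linalg.solve(A, D_s.T @ x)

    # Embed back into a full-length code; unselected atoms stay zero,
    # so the result is sparse and local at the same time.
    code = np.zeros(D.shape[1])
    code[idx] = c_local
    return code

# Toy usage
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 200))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
x = rng.standard_normal(64)
c = locality_constrained_code(x, D)
print(np.count_nonzero(c))                           # at most k nonzero coefficients

Because only a small linear system of size k is solved per sample, this kind of coding avoids the iterative optimization that l1-based solvers require, which is consistent with the efficiency claim in the abstract.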
