Abstract

The Sparse Representation Classifier (SRC) and its variants are considered powerful classifiers in computer vision and pattern recognition. However, classifying a test sample is computationally expensive because an $$\ell _1$$ norm minimization problem must be solved to obtain its sparse code, which makes these classifiers ill-suited to scenarios requiring fast classification. To overcome the high computational cost of SRC, a two-phase coding classifier based on classic Regularized Least Squares was proposed; it is considerably more efficient than SRC. Its main limitation, however, is that the number of samples handed over to the second coding phase must be specified a priori. This paper addresses that limitation and proposes five data-driven schemes that automatically estimate the optimal number of local samples. These schemes cover the three settings encountered in any learning system: supervised, unsupervised, and semi-supervised. Experiments conducted on five image datasets show that the introduced learning schemes can improve on the performance of the two-phase linear coding classifier that adopts ad-hoc choices for the number of local samples.
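To make the two-phase idea concrete, the following is a minimal sketch of a two-phase regularized least-squares coding classifier with a hand-picked number of local samples M (the quantity the paper proposes to estimate automatically). The function name, the deviation-based selection rule, and the regularization weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def two_phase_rls_classify(X, labels, y, M, lam=0.01):
    """Sketch of a two-phase regularized least-squares coding classifier.

    X      : (d, n) matrix whose columns are training samples
    labels : (n,) class label of each training sample
    y      : (d,) test sample
    M      : number of local samples kept for the second phase
             (fixed a priori here; the paper's schemes estimate it)
    lam    : ridge (regularized least-squares) weight, assumed value
    """
    d, n = X.shape

    # Phase 1: code the test sample over ALL training samples with
    # regularized least squares (no l1 solver is needed).
    A = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

    # Rank training samples by how far each weighted sample deviates
    # from the test sample, and keep the M closest ("local") ones.
    deviations = np.linalg.norm(y[:, None] - X * A[None, :], axis=0)
    keep = np.argsort(deviations)[:M]

    # Phase 2: re-code the test sample over the M selected local samples.
    Xm, lm = X[:, keep], labels[keep]
    B = np.linalg.solve(Xm.T @ Xm + lam * np.eye(M), Xm.T @ y)

    # Assign the class whose selected samples best reconstruct y.
    classes = np.unique(lm)
    residuals = [np.linalg.norm(y - Xm[:, lm == c] @ B[lm == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

Both phases only solve small linear systems, which is what makes this family of classifiers much faster than $$\ell _1$$-based SRC; the open question the paper tackles is how to choose M from the data rather than by hand.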
