Abstract

The analysis dictionary learning (ADL) model has attracted much interest from researchers in representation-based classification due to its scalability and efficiency in out-of-sample classification. However, the discrimination of the analysis representation is not fully exploited when the supervised information is only roughly considered in the presence of redundant and noisy samples. In this paper, we propose a discriminative and robust analysis dictionary learning (DR-ADL) model, which explores the underlying structural information of data samples. First, a supervised latent structural term is implicitly incorporated to generate a roughly block-diagonal representation for intra-class samples. This discriminative structure, however, is fragile in the presence of noisy and redundant samples. Focusing on both intra-class and inter-class information, we therefore explicitly incorporate an off-block suppressing term into the ADL model to enforce a discriminative structured representation. Moreover, a non-negative constraint is imposed on the representations so that the contribution of each atom admits a reasonable interpretation. Finally, the DR-ADL model is solved efficiently in an alternating manner by the K-SVD method, an iteratively re-weighted method, and a gradient method. Experimental results on classification over four benchmark face datasets validate the superior performance of our DR-ADL model.

Keywords: Analysis dictionary learning; Off-block suppression; K-SVD method; Latent space classification
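To illustrate the idea behind the off-block suppressing term, here is a minimal sketch. It assumes (as is common in structured dictionary learning, not necessarily the paper's exact formulation) that each dictionary atom is assigned to a class, and that coefficients of a sample on atoms of other classes should be driven toward zero. The function name `off_block_penalty` and the atom-to-class assignment are illustrative assumptions.

```python
import numpy as np

def off_block_penalty(Z, atom_labels, sample_labels):
    """Sum of squared analysis coefficients that fall outside the
    class-aligned diagonal blocks of Z (an off-block suppressing term).

    Z            : (n_atoms, n_samples) analysis representation
    atom_labels  : (n_atoms,)   class assigned to each atom (assumption)
    sample_labels: (n_samples,) class of each training sample
    """
    # mask[j, i] is True when atom j and sample i belong to different
    # classes, i.e. the entry lies off the block diagonal
    mask = atom_labels[:, None] != sample_labels[None, :]
    return float(np.sum((Z * mask) ** 2))

# Illustrative example: 4 atoms and 4 samples over 2 classes
atom_labels = np.array([0, 0, 1, 1])
sample_labels = np.array([0, 0, 1, 1])
rng = np.random.default_rng(0)
Z = rng.standard_normal((4, 4))
print(off_block_penalty(Z, atom_labels, sample_labels))
```

Minimizing this term (together with the non-negativity constraint, e.g. `Z = np.maximum(Z, 0)`) pushes the representation toward the block-diagonal structure the abstract describes, so intra-class samples activate mostly their own class's atoms.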
