Abstract

Dictionary learning has played an important role in the success of data representation. As a complete view of data representation, hybrid dictionary learning (HDL) is still in its infancy. Previous HDL approaches have not well addressed the question of how to learn an effective hybrid dictionary for image classification. In this paper, we propose a locality preserving and label-aware constraint-based hybrid dictionary learning (LPLC-HDL) method and apply it effectively to image classification. More specifically, the locality information of the data is preserved by a graph Laplacian matrix based on the shared dictionary for learning the commonality representation, and a label-aware constraint with group regularization is imposed on the coding coefficients corresponding to the class-specific dictionary for learning the particularity representation. Moreover, all the constraints introduced in the proposed LPLC-HDL method are based on l2-norm regularization and can be solved efficiently via an alternating optimization strategy. Extensive experiments on benchmark image datasets demonstrate that our method improves over previous competing methods on both hand-crafted and deep features.
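To make the label-aware idea concrete, the following minimal sketch (in Python/NumPy, with hypothetical helper names; a simplification of the paper's constraint, not its exact formulation) penalizes coefficients on atoms whose class label differs from the sample's label, so a sample is encouraged to be coded by its own class-specific sub-dictionary. Because all terms are l2-norm penalties, the coding step has a closed-form ridge-style solution:

```python
import numpy as np

def label_aware_code(Dict, x, atom_labels, y, beta=1.0, gamma=0.1):
    """Label-aware ridge coding (illustrative sketch, not the paper's exact objective):
        min_a ||x - Dict a||^2 + gamma ||a||^2 + beta * sum_{j: atom_labels[j] != y} a_j^2
    Atoms from classes other than y receive the extra penalty beta, shrinking
    their coefficients toward zero.
    Dict: (d, m) dictionary, x: (d,) sample, atom_labels: (m,) class of each atom,
    y: class label of x."""
    pen = gamma + beta * (atom_labels != y).astype(float)
    # Normal equations of the quadratic objective: (D^T D + diag(pen)) a = D^T x
    return np.linalg.solve(Dict.T @ Dict + np.diag(pen), Dict.T @ x)
```

With a large `beta`, the coefficients on off-class atoms are driven close to zero, which is the effect the label-aware constraint is designed to achieve.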

Highlights

  • Due to the insufficiency of data representation, Dictionary Learning (DL) has aroused considerable interest in the past decade and achieved much success in various applications, such as image denoising [1,2], person re-identification [3,4] and vision recognition [5,6,7,8]. Generally speaking, DL methods are developed based on a basic hypothesis: a test signal can be well approximated by a linear combination of some atoms in a dictionary

  • We propose the locality preserving and label-aware constraint-based hybrid dictionary learning (LPLC-HDL) method for image classification, which is composed of a label-aware constraint, a group regularization and a locality constraint

  • We can see that the hybrid dictionary learning methods including DL-COPAR [16], Low-Rank Shared Dictionary Learning (LRSDL) [17] and CLSDDL [19] outperform the remaining competing approaches, and the proposed LPLC-HDL method achieves the best recognition accuracy of 97.01%, which illustrates that our method can distinguish the shared and class-specific information of face images more appropriately



Introduction

Due to the insufficiency of data representation, Dictionary Learning (DL) has aroused considerable interest in the past decade and achieved much success in various applications, such as image denoising [1,2], person re-identification [3,4] and vision recognition [5,6,7,8]. A dictionary learning approach should learn distinctive features with a class-specific dictionary, and simultaneously exploit the common features of correlated classes by learning a commonality dictionary. To this end, we propose the locality preserving and label-aware constraint-based hybrid dictionary learning (LPLC-HDL) method for image classification, which is composed of a label-aware constraint, a group regularization and a locality constraint. The proposed LPLC-HDL method learns the hybrid dictionary by fully exploiting the locality information and the label information of the data. In this way, the learned hybrid dictionary can preserve the complex structural information of the data and has strong discriminative power for image classification.
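As an illustration of how a graph-Laplacian locality constraint can be combined with l2-norm regularization in a single coding step, the sketch below (Python/NumPy and SciPy, with hypothetical helper names; an assumption-laden simplification, not the paper's exact objective) builds a k-NN graph Laplacian and solves the resulting coupled linear system, which takes the form of a Sylvester equation:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def knn_laplacian(X, k=3):
    """Unnormalized graph Laplacian L = Deg - W of a symmetrized k-NN graph.
    X: (n_samples, n_features). Illustrative construction, not the paper's exact one."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    dist = sq[:, None] + sq[None, :] - 2 * X @ X.T   # squared Euclidean distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]          # k nearest, excluding self
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                           # symmetrize the graph
    return np.diag(W.sum(axis=1)) - W

def locality_coding(Dict, X, Lap, lam=0.1, gamma=0.1):
    """Locality-preserving ridge coding (sketch):
        min_A ||X - Dict A||_F^2 + lam * tr(A Lap A^T) + gamma * ||A||_F^2
    Setting the gradient to zero yields the Sylvester equation
        (D^T D + gamma I) A + A (lam * Lap) = D^T X.
    Dict: (d, m) dictionary, X: (d, n) data, Lap: (n, n) graph Laplacian."""
    m = Dict.shape[1]
    return solve_sylvester(Dict.T @ Dict + gamma * np.eye(m), lam * Lap, Dict.T @ X)
```

The Laplacian term pulls the codes of neighboring samples toward each other, which is one standard way to preserve the locality structure of the data; since every penalty is an l2-norm term, the update stays in closed form, consistent with the alternating optimization described above.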

Notation
The LCLE-DL Algorithm
The Objective Function of Hybrid DL
The Proposed Method
The Locality Constraint for Commonality Representation
The Label-Aware Constraint
The Group Regularization
Optimization Strategy of LPLC-HDL
Shared Dictionary Learning
Class-Specific Dictionary Learning
Classification Procedure
Experiments
Experiments on the Yale Face Dataset
Experiments on the Extended YaleB Face Dataset
Experiments on the LFW Face Dataset
Object Classification
Flower Classification
Parameter Sensitivity
Evaluation of Computational Time
Conclusions

