Abstract

K-singular value decomposition (K-SVD) is a widely used dictionary learning (DL) algorithm that alternates between sparse coding and dictionary updating. The sparse coding step generates sparse coefficients for each training sample, and these coefficients induce clustering features. In applications such as image processing, the features of different clusters vary dramatically. However, all the atoms of the dictionary jointly represent the features, regardless of cluster, which reduces the accuracy of the sparse representation. To address this problem, in this study we develop the clustering K-SVD (CK-SVD) algorithm for DL and a corresponding greedy algorithm for sparse representation. The atoms are divided into a set of groups, and each group of atoms is employed to represent the image features of a specific cluster. Hence, the features of all clusters can be utilized and the number of redundant atoms is reduced. Additionally, two practical extensions of CK-SVD are provided. Experimental results demonstrate that the proposed methods provide more accurate sparse representation of images than the conventional K-SVD and its existing extensions. The proposed clustering DL model also has the potential to be applied to online DL.
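As a rough illustration of the group-wise coding idea described above, the sketch below restricts a greedy (OMP-style) atom selection to the group of atoms assigned to a sample's cluster. This is a minimal NumPy sketch, not the authors' implementation; the function name group_restricted_omp and its interface are assumptions made for illustration.

    import numpy as np

    def group_restricted_omp(z, D, group_idx, k):
        """Greedy sparse coding restricted to one atom group (illustrative sketch).

        Only the atoms indexed by group_idx (the group assigned to this
        sample's cluster) may enter the support, mirroring the idea that
        each group of atoms represents the features of one cluster.
        """
        s = np.zeros(D.shape[1])
        residual = z.copy()
        support = []
        for _ in range(k):
            # Select the in-group atom most correlated with the current residual.
            corr = np.abs(D[:, group_idx].T @ residual)
            support.append(int(group_idx[int(np.argmax(corr))]))
            # Least-squares fit on the current support, then update the residual.
            coef, *_ = np.linalg.lstsq(D[:, support], z, rcond=None)
            s[support] = coef
            residual = z - D[:, support] @ coef
        return s

Restricting the candidate set keeps each cluster's representation confined to its own atom group; in the full CK-SVD pipeline (as described in the abstract), the dictionary update would then refine each group using the samples of its cluster.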

Highlights

  • Sparse representation aims to model signals as sparse linear combinations of the atoms in a dictionary, and this technique is widely used in various fields of image processing [1,2,3,4]

  • We propose the clustering K-singular value decomposition (CK-SVD) algorithm for dictionary learning (DL) and the corresponding greedy algorithm for sparse recovery

  • We carry out the CK-SVD-based DL process and the conventional K-SVD-based DL process, respectively, using the training dataset, in order to verify the improvement of CK-SVD over the conventional K-SVD



Introduction

Sparse representation aims to model signals as sparse linear combinations of the atoms in a dictionary, and this technique is widely used in various fields of image processing [1,2,3,4]. Let z ∈ R^n and D ∈ R^{n×q} (q ≥ n) denote a signal and an over-complete dictionary, respectively. The sparse representation of z with respect to the dictionary D is expressed as z ≈ Ds. The sparse coefficient vector s ∈ R^q satisfies ‖s‖₀ ≤ k and ‖z − Ds‖₂ ≤ ε, where ‖·‖₀ denotes the number of non-zero entries of a vector, and k and ε represent the maximum number of sparse coefficients and the sparse representation error, respectively. The dictionaries used for sparse representation can be divided into two categories: analytical dictionaries and learned dictionaries. Analytical dictionaries, such as wavelet dictionaries, can be universally applied and are easy to obtain.
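The definitions above can be made concrete with a small numerical check. The snippet below is an illustrative NumPy example (not from the paper): it builds a random over-complete dictionary with unit-norm atoms, forms a k-sparse coefficient vector s, and verifies that ‖s‖₀ ≤ k and that ‖z − Ds‖₂ is negligible for z = Ds.

    import numpy as np

    rng = np.random.default_rng(0)
    n, q, k = 16, 32, 3  # signal dimension, number of atoms (q >= n), sparsity level

    # Over-complete dictionary with unit-norm columns (atoms).
    D = rng.standard_normal((n, q))
    D /= np.linalg.norm(D, axis=0)

    # A k-sparse coefficient vector s and the corresponding signal z = D s.
    s = np.zeros(q)
    s[rng.choice(q, size=k, replace=False)] = rng.standard_normal(k)
    z = D @ s

    print(np.count_nonzero(s) <= k)              # ||s||_0 <= k         -> True
    print(np.linalg.norm(z - D @ s) <= 1e-10)    # ||z - D s||_2 <= eps -> True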

