Abstract

In order to find sparse approximations of signals, an appropriate generative model for the signal class has to be known. If the model is unknown, it can be adapted using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point is guaranteed, unless the accumulation points form a continuum. This holds for different sparsity measures. The majorization method is an optimization method that substitutes the original objective function with a surrogate function that is updated in each optimization step. This method has been used successfully in sparse approximation and statistical estimation [e.g., expectation-maximization (EM)] problems. This paper shows that the majorization method can be used for the dictionary learning problem too. The proposed method is compared with other methods on both synthetic and real data, and different constraints on the dictionary are compared. Simulations show the advantages of the proposed method over other currently available dictionary learning methods not only in terms of average performance but also in terms of computation time.
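To make the surrogate idea concrete, the sketch below applies one such majorization step repeatedly to the sparse approximation subproblem with an ℓ1 sparsity measure. This is a minimal sketch, assuming an ℓ1 penalty and illustrative function names; the paper itself also treats other sparsity measures and the matrix-valued case.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the minimiser of t*|x| + (x - z)^2."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def mm_sparse_approx(y, D, lam, n_iter=200):
    """Minimise ||y - D x||_2^2 + lam * ||x||_1 by majorization-minimization.

    Each iteration replaces the quadratic data term with a surrogate that
    touches it at the current iterate; minimising the surrogate reduces to a
    gradient (Landweber) step followed by soft-thresholding.
    """
    c = 1.01 * np.linalg.norm(D, 2) ** 2   # c > ||D||^2 makes the surrogate majorize the objective
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + D.T @ (y - D @ x) / c, lam / (2.0 * c))
    return x
```

Because the surrogate is minimised exactly at each step and touches the original objective at the current iterate, the objective value is non-increasing over the iterations.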

Highlights

  • Orthogonal function representations, introduced in the nineteenth century, are still a powerful tool in signal analysis

  • This paper introduces a new algorithm for constrained dictionary learning that is flexible and can accommodate different constraints on the dictionary

  • In the majorization method, an auxiliary parameter is introduced and distinguished by a double-dagger superscript, e.g. X‡; an illustrative surrogate built around such a parameter is shown after this list

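To illustrate the role of the auxiliary parameter, a surrogate of the kind typically used in this setting majorizes the quadratic data term around the current iterate X‡ (shown here as an illustration; the exact form used in the paper should be taken from the full text):

g(X, X‡) = ‖Y − DX‖²_F + λ J(X) + c ‖X − X‡‖²_F − ‖D(X − X‡)‖²_F,  with c > ‖D‖²,

where J(·) is the sparsity measure. When c exceeds the squared spectral norm of D, g(X, X‡) ≥ ‖Y − DX‖²_F + λ J(X) for all X, with equality at X = X‡, so minimizing g over X and then moving X‡ to the new minimizer decreases the original objective at every step.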

Summary

INTRODUCTION

Orthogonal function representations, introduced in the nineteenth century, are still a powerful tool in signal analysis. An advantage of the proposed method is that it optimizes a joint objective function of the sparse coefficient matrix and the dictionary. In this framework, it is possible to choose a better path from the initial to the learnt dictionary by reducing the objective in different directions (coefficients or dictionary) in a cyclic way. This prevents oscillations of the sequence of updates around the optimal path and makes the algorithm more suitable for large-scale problems, for which the calculation of sparse approximations of the training samples is often impossible. Throughout, ‖·‖ and ‖·‖_F denote the spectral and Frobenius norms in the Euclidean vector space respectively, and ‖·‖_p, 0 < p ≤ 1, is the ℓp quasi-norm, ‖x‖_p = (Σ_i |x_i|^p)^{1/p}.
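The cyclic (block-relaxed) scheme described above can be sketched as alternating majorized updates of the coefficients and of the dictionary. The sketch below is illustrative only, assuming an ℓ1 sparsity measure, a bounded column-norm constraint on the dictionary, and hypothetical function names; it is not the paper's exact algorithm.

```python
import numpy as np

def learn_dictionary(Y, n_atoms, lam, n_outer=50, n_inner=10, seed=0):
    """Block-relaxed dictionary learning sketch.

    Cycles between (i) a majorized update of the sparse coefficient matrix X
    and (ii) a majorized update of the dictionary D, followed by projection
    onto the constraint set of column norms at most one.
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                 # start from unit-norm atoms
    X = np.zeros((n_atoms, Y.shape[1]))

    for _ in range(n_outer):
        # Coefficient block: a few majorization steps on the l1-penalised fit.
        c = 1.01 * np.linalg.norm(D, 2) ** 2
        for _ in range(n_inner):
            Z = X + D.T @ (Y - D @ X) / c
            X = np.sign(Z) * np.maximum(np.abs(Z) - lam / (2.0 * c), 0.0)

        # Dictionary block: majorized least-squares step, then project each
        # column back onto the ball ||d_i|| <= 1.
        d = 1.01 * np.linalg.norm(X, 2) ** 2 + 1e-12   # epsilon guards against X == 0
        for _ in range(n_inner):
            D = D + (Y - D @ X) @ X.T / d
            col_norms = np.linalg.norm(D, axis=0)
            D /= np.maximum(col_norms, 1.0)

    return D, X
```

The main design choices in such a scheme are the number of inner iterations spent in each block and the constraint imposed on the dictionary; the paper compares several dictionary constraints in its simulations.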

DICTIONARY LEARNING METHODS
Sparse Approximation
Dictionary Update
Previously Suggested Dictionary Update Methods
DICTIONARY LEARNING WITH THE MAJORIZATION METHOD
Majorization Minimization Method
Matrix Valued Sparse Approximation
Jointly Sparse Dictionaries
Generalized block relaxation method for dictionary learning
SIMULATIONS
Synthetic Data
Dictionary Learning for Sparse Audio Coding
CONCLUSIONS
Generalized block-relaxed iterative mappings and their convergence
Convergence study of the generalized block-relaxed dictionary learning
