Abstract

Dictionary learning is a challenging topic in many image processing areas. The basic goal is to learn a sparse representation from an overcomplete basis set. By combining the advantages of generic multiscale representations with learning-based adaptivity, multiscale dictionary representation approaches are well suited to capturing the structural characteristics of natural images. However, existing multiscale learning approaches still suffer from three main weaknesses: inadaptability to diverse scales of image data, sensitivity to noise and outliers, and difficulty in determining the optimal dictionary structure. In this paper, we present a novel multiscale dictionary learning paradigm for sparse image representation based on an improved empirical mode decomposition. This powerful data-driven analysis tool for multi-dimensional signals can fully adaptively decompose an image into multiscale oscillating components according to the intrinsic modes of the data itself. This treatment yields a robust and effective sparse representation and, at the same time, generates a raw dictionary spanning multiple geometric scales and spatial frequency bands. The raw dictionary is refined by selecting the optimal oscillating atoms through frequency clustering. To further enhance sparsity and generalization, a tolerant dictionary is learned with a coherence-regularized model, and a fast proximal scheme is developed to optimize it. The final multiscale dictionary is formed as the product of the oscillating dictionary and the tolerant dictionary. Experimental results demonstrate that the proposed method outperforms several competing methods for sparse image representation. We also show promising results in an image denoising application.
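For readers who want a concrete picture of the data flow sketched in the abstract, the following Python snippet is a minimal, hypothetical illustration, not the authors' implementation. The improved empirical mode decomposition is replaced by a simple Gaussian band-pass split, and the coherence-regularized model with its proximal solver is replaced by scikit-learn's standard DictionaryLearning; the function names (bandpass_decompose, build_raw_dictionary, learn_tolerant_dictionary) and all parameter values are illustrative assumptions.

```python
# Hypothetical sketch of the multiscale dictionary-learning pipeline described above.
# Stand-ins: a Gaussian band-pass split replaces the paper's improved EMD, and
# scikit-learn's DictionaryLearning replaces the coherence-regularized model with
# its fast proximal solver. Only the overall data flow is illustrated.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d


def bandpass_decompose(image, sigmas=(1.0, 2.0, 4.0)):
    """Placeholder for the improved EMD: split the image into oscillating
    band-pass components plus a smooth low-frequency residue."""
    components, previous = [], image.astype(float)
    for sigma in sigmas:
        smoothed = gaussian_filter(previous, sigma)
        components.append(previous - smoothed)  # oscillating component at this scale
        previous = smoothed
    components.append(previous)                 # low-frequency residue
    return components


def build_raw_dictionary(components, patch_size=(8, 8), patches_per_band=200, seed=0):
    """Collect patches from every scale band to form a raw (oscillating) dictionary."""
    atoms = []
    for band in components:
        patches = extract_patches_2d(band, patch_size,
                                     max_patches=patches_per_band, random_state=seed)
        atoms.append(patches.reshape(len(patches), -1))
    atoms = np.vstack(atoms)
    norms = np.linalg.norm(atoms, axis=1, keepdims=True)
    return atoms / np.maximum(norms, 1e-8)       # unit-norm atoms


def learn_tolerant_dictionary(raw_atoms, n_atoms=64, seed=0):
    """Stand-in for the coherence-regularized learning step; plain l1-penalized
    dictionary learning is used here instead of the paper's proximal scheme."""
    model = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                               max_iter=50, random_state=seed)
    model.fit(raw_atoms)
    return model.components_


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))                 # toy image for illustration only
    bands = bandpass_decompose(image)
    raw = build_raw_dictionary(bands)
    tolerant = learn_tolerant_dictionary(raw)
    print("raw dictionary:", raw.shape, "tolerant dictionary:", tolerant.shape)
```

In the paper's formulation the final multiscale dictionary is the product of the oscillating dictionary and the tolerant dictionary; in this toy sketch that would correspond to combining the band-wise raw atoms with the learned components, a step omitted here because it depends on the authors' coherence-regularized model.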
