Sparsity is an attractive property that has been widely and intensively exploited in image processing (e.g., robust image representation, image compression, and image analysis). Its success stems from mining the intrinsic (or homogeneous) information embedded in data that otherwise carry much redundancy. From the perspective of image representation, sparsity can identify an underlying homogeneous subspace within a collection of training data to represent a given test sample. The well-known sparse representation (SR) and its variants enforce sparsity by representing the test sample as a linear combination of training samples under L0-norm or L1-norm regularization. Although these state-of-the-art methods achieve strong and robust performance, they do not fully exploit sparsity in image representation in three respects: 1) within-sample sparsity, 2) between-sample sparsity, and 3) image structural sparsity. In this paper, to make these multi-context sparsity properties agree and be learned simultaneously within one model, we propose the concept of consensus sparsity (Con-sparsity) and correspondingly build a multi-context sparse image representation (MCSIR) framework to realize it. We theoretically prove that consensus sparsity can be achieved by the L∞-induced matrix variate based on Bayesian inference. Extensive experiments and comparisons with state-of-the-art methods (including deep learning) demonstrate the promising performance and properties of the proposed consensus sparsity.
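For reference, the classical L1-regularized sparse representation mentioned above can be written as the following optimization (a generic sketch, not the paper's Con-sparsity model; here y denotes the test sample, D the dictionary stacking the training samples column-wise, and λ > 0 an assumed regularization weight):

\[
\hat{\alpha} \;=\; \arg\min_{\alpha}\; \lVert y - D\alpha \rVert_2^2 \;+\; \lambda \lVert \alpha \rVert_1,
\]

where the L0-norm variant replaces \(\lVert \alpha \rVert_1\) with \(\lVert \alpha \rVert_0\); this formulation captures only the within-sample sparsity that the proposed framework extends to multiple contexts.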