Abstract

Dictionaries with separable structure reduce the computational load of sparse coding and dictionary learning algorithms and ensure that patterns present in 2D data are not broken by vectorization. We propose an adaptation of the sparse Bayesian learning (SBL) framework for sparse approximation to the 2D separable case. Our algorithm has two stages. In the first, the hierarchical prior model targets the sparsity patterns occurring in each dimension, thereby focusing on the representation structure. The underlying 2D row-column structure of the sparse support is thus recovered via two separate SBL processes. Simulations show that this recovery requires considerably fewer iterations than the non-separable method, especially in noisy setups; moreover, because we exploit the separable structure, each iteration is considerably faster. In the second stage, only a few iterations of standard SBL on the reduced support are needed to obtain the signal representation. In addition to this significant improvement over SBL in numerical complexity, we demonstrate, in a series of tests carried out on synthetic data and images, that the separable formulation also achieves comparable accuracy.
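The two-stage idea described above can be sketched as follows. This is a minimal, illustrative NumPy sketch, not the authors' exact update rules: it assumes a separable model Y ≈ A X Bᵀ, recovers the row support of X by running an EM-style multiple-measurement SBL with dictionary A on Y, recovers the column support by running the same procedure with dictionary B on Yᵀ, and then estimates the coefficients restricted to the recovered support (here via least squares as a stand-in for a few standard SBL iterations). The function name `sbl_mmv_support`, the noise level, the iteration count, and the pruning threshold are all illustrative assumptions.

```python
import numpy as np

def sbl_mmv_support(Phi, Y, n_iter=50, sigma2=1e-3, thresh=1e-2):
    """EM-style SBL over multiple measurement vectors (illustrative sketch).

    Returns indices whose hyperparameters gamma remain large, i.e. the
    sparsity pattern shared by all columns of the coefficient matrix.
    """
    n = Phi.shape[1]
    gamma = np.ones(n)
    for _ in range(n_iter):
        # Posterior covariance/mean under the Gaussian prior N(0, diag(gamma))
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
        Mu = Sigma @ Phi.T @ Y / sigma2            # posterior means, n x L
        # EM update of the hyperparameters, averaged over measurement vectors
        gamma = (Mu ** 2).mean(axis=1) + np.diag(Sigma)
    return np.where(gamma > thresh * gamma.max())[0]

# Synthetic separable setup: Y = A X B^T + noise, X row-column sparse
rng = np.random.default_rng(0)
m, n = 20, 32
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
X = np.zeros((n, n))
rows, cols = [3, 10, 25], [5, 17]
X[np.ix_(rows, cols)] = rng.standard_normal((len(rows), len(cols)))
Y = A @ X @ B.T + 1e-3 * rng.standard_normal((m, m))

# Stage 1: two separate SBL processes, one per dimension.
row_supp = sbl_mmv_support(A, Y)       # row support of X (Y = A (X B^T))
col_supp = sbl_mmv_support(B, Y.T)     # column support of X (Y^T = B (X^T A^T))

# Stage 2: estimation restricted to the recovered row-column support
# (least squares here as a stand-in for a few standard SBL iterations).
A_r, B_c = A[:, row_supp], B[:, col_supp]
X_hat = np.linalg.pinv(A_r) @ Y @ np.linalg.pinv(B_c).T
```

Because each stage works with the small per-dimension dictionaries A and B rather than their Kronecker product, every iteration involves inverses of size n rather than n², which is the source of the per-iteration savings the abstract refers to.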

