Abstract

Sparse unmixing is a promising approach that formulates hyperspectral unmixing as a linear regression problem by assuming that each observed spectral signature can be expressed as a linear combination of a few endmembers from a spectral library. Under this formulation, a novel regularized multiple sparse Bayesian learning model is proposed to solve the sparse unmixing problem; the model is constructed via Bayesian inference using the conditional posterior distributions of the model parameters under a hierarchical Bayesian model. Total variation regularization and a non-negativity constraint are then incorporated into the model to exploit the spatial information and the physical properties of hyperspectral images. The resulting optimization problem is decomposed into several simpler iterative subproblems that are solved via the alternating direction method of multipliers (ADMM), and the model parameters are updated adaptively within the algorithm. Experimental results on both synthetic and real hyperspectral data demonstrate that the proposed method outperforms competing algorithms.

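To make the linear sparse regression formulation concrete, the sketch below shows a minimal non-negative, l1-regularized unmixing of a single pixel against a spectral library, solved with a plain ADMM splitting. It is only an illustrative baseline under assumed data shapes; the function name `sparse_unmix_admm`, the random library `A`, and the parameters `lam` and `rho` are hypothetical, and this is not the paper's regularized multiple sparse Bayesian learning model or its total-variation-regularized, adaptively tuned extension.

```python
import numpy as np

def sparse_unmix_admm(A, y, lam=1e-3, rho=1.0, n_iters=200):
    """Baseline sparse unmixing of one pixel spectrum.

    Solves  min_x 0.5*||A x - y||^2 + lam*||x||_1  subject to  x >= 0
    via ADMM. A is the (bands x library_size) spectral library and y is
    the (bands,) observed pixel spectrum.
    """
    m = A.shape[1]
    # Pre-factor the x-subproblem normal equations (A^T A + rho*I).
    AtA = A.T @ A
    Aty = A.T @ y
    L = np.linalg.cholesky(AtA + rho * np.eye(m))

    x = np.zeros(m)
    z = np.zeros(m)   # splitting variable (carries sparsity / non-negativity)
    u = np.zeros(m)   # scaled dual variable
    for _ in range(n_iters):
        # x-step: ridge-regularized least squares via the Cholesky factor
        rhs = Aty + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-step: non-negative soft-thresholding enforces sparsity and x >= 0
        z = np.maximum(x + u - lam / rho, 0.0)
        # dual update
        u += x - z
    return z

# Toy usage: a pixel mixed from 3 of 20 library signatures plus noise.
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(100, 20)))            # hypothetical spectral library
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [0.5, 0.3, 0.2]
y = A @ x_true + 0.01 * rng.normal(size=100)
x_hat = sparse_unmix_admm(A, y, lam=0.05)
print(np.round(x_hat, 3))                          # recovers a sparse abundance vector
```

The splitting mirrors the decomposition strategy mentioned in the abstract: the data-fit term is handled in a closed-form least-squares step, while sparsity and non-negativity are handled in a separate proximal step, with a dual update tying the two together.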