Abstract

In many statistical hyperspectral unmixing approaches, the unmixing task is essentially an optimization problem defined by a linear or nonlinear spectral mixture model. However, most model inference algorithms require a time-consuming iterative procedure. Neural networks, on the other hand, have recently been used to estimate abundances from training samples, or to estimate endmembers and abundances simultaneously in an unsupervised setting. Their disadvantages are clear, however: a lack of interpretability and a reliance on large training sets. Model-inspired neural networks are constructed from the problem model and its corresponding inference algorithm; they incorporate prior knowledge of the physical model and the algorithm into the network architecture, combining the advantages of model-based and learning-based methods. This article deeply unfolds the linear mixture model and the corresponding iterative shrinkage-thresholding algorithm (ISTA) to build two unmixing network architectures. The first assumes that the set of endmembers is known, and the deep unfolded ISTA model is used only for abundance estimation; the second is used for blind unmixing, estimating endmembers and abundances at the same time. The networks can be trained with supervised and unsupervised schemes, respectively, on a small training set, after which unmixing becomes a feedforward process that is very fast because no iteration is required. Experimental results show competitive performance compared with state-of-the-art unmixing approaches.
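For illustration, the following is a minimal sketch (not the authors' implementation) of an unfolded ISTA network for supervised abundance estimation under the linear mixture model y = E a + noise. The layer count, the learnable soft-threshold parameterization, and the final normalization standing in for the abundance sum-to-one constraint are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class UnfoldedISTA(nn.Module):
    def __init__(self, n_bands, n_endmembers, n_layers=10):
        super().__init__()
        self.n_layers = n_layers
        # W_e plays the role of (1/L) E^T and W_g the role of I - (1/L) E^T E
        # in classical ISTA; here both are learned from training data.
        self.W_e = nn.Linear(n_bands, n_endmembers, bias=False)
        self.W_g = nn.Linear(n_endmembers, n_endmembers, bias=False)
        # One learnable soft-threshold per unfolded iteration (sparsity step).
        self.theta = nn.Parameter(0.01 * torch.ones(n_layers))

    def forward(self, y):
        # y: (batch, n_bands) mixed pixel spectra.
        a = F.relu(self.W_e(y) - self.theta[0])            # first unfolded iteration
        for k in range(1, self.n_layers):
            z = self.W_e(y) + self.W_g(a)                  # gradient-like update
            a = F.relu(z - self.theta[k])                  # non-negative soft-threshold
        # Simple normalization as a stand-in for the sum-to-one constraint.
        return a / (a.sum(dim=1, keepdim=True) + 1e-8)

In the supervised setting described above, such a network could be trained on a small labeled set by minimizing a reconstruction or abundance-regression loss, after which inference is a single feedforward pass with no iterations.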
