Abstract

Nonnegative matrix factorization (NMF) is widely used for hyperspectral unmixing (HU) because it simultaneously decomposes the hyperspectral data matrix into two nonnegative matrices. However, traditional NMF can neither guarantee the sparsity of the decomposition results nor preserve the geometric structure of the data during decomposition. Meanwhile, deep learning, with carefully designed multi-layer structures, has shown great potential in learning data representations and has been widely applied in many fields. In this paper, we propose a graph-regularized and sparsity-constrained deep NMF (GSDNMF) for hyperspectral unmixing. The deep NMF structure is obtained by unfolding NMF into multiple layers. To improve unmixing performance, L1 regularizers on both the endmember and abundance matrices impose sparsity constraints, and a graph regularization term in each layer preserves the geometric structure. Since the model is a multi-factor NMF problem, it is difficult to optimize all factors simultaneously. To obtain better initializations, we propose a layer-wise pretraining strategy that initializes the deep network using the efficient NMF solver NeNMF. An alternating update algorithm then fine-tunes the network to obtain the final decomposition results. Experiments on both synthetic and real data demonstrate that our algorithm outperforms several state-of-the-art approaches.
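The per-layer subproblem sketched in the abstract (Frobenius reconstruction plus L1 sparsity and a graph-Laplacian term) can be illustrated with standard multiplicative updates, followed by a greedy layer-wise pretraining loop in the spirit of unfolding NMF into multiple layers. This is a minimal sketch under assumptions: the function names, the heat-kernel similarity graph, and the update rules shown are illustrative; the paper's actual solver is NeNMF-based and is not reproduced here.

```python
import numpy as np

def graph_sparse_nmf(X, r, lam=0.1, mu=0.01, n_iter=200, seed=0):
    """One-layer graph-regularized, L1-sparse NMF via multiplicative updates.

    Approximately minimizes
        ||X - W H||_F^2 + lam * tr(H L H^T) + mu * (sum(W) + sum(H)),
    where L = D - A is the Laplacian of a heat-kernel similarity graph
    built over the columns (pixels) of X.  Illustrative sketch only --
    not the authors' NeNMF-based solver.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1

    # Heat-kernel affinity between pixels (columns of X); zero diagonal.
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    A = np.exp(-d2 / (d2.mean() + 1e-12))
    np.fill_diagonal(A, 0.0)
    D = np.diag(A.sum(axis=1))

    eps = 1e-12
    for _ in range(n_iter):
        # L1 on W adds mu to the denominator of the standard update.
        W *= (X @ H.T) / (W @ H @ H.T + mu + eps)
        # Graph term tr(H L H^T) splits as +lam*H@A (numerator),
        # +lam*H@D (denominator); L1 on H adds mu to the denominator.
        H *= (W.T @ X + lam * (H @ A)) / (W.T @ W @ H + lam * (H @ D) + mu + eps)
    return W, H

def pretrain_deep(X, ranks, **kw):
    """Layer-wise pretraining: X ~ W1 W2 ... WL H_L, each layer
    factored greedily from the previous layer's coefficient matrix."""
    Ws, H = [], X
    for r in ranks:
        Wl, H = graph_sparse_nmf(H, r, **kw)
        Ws.append(Wl)
    return Ws, H
```

Because multiplicative updates keep every factor elementwise nonnegative given a nonnegative start, no projection step is needed; a fine-tuning pass over all layers (the paper's alternating update) would follow the pretraining loop.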
