ABSTRACT Because photons interact with materials in complex ways, linear hyperspectral unmixing (HU) is of limited use in real scenarios, making nonlinear unmixing a promising alternative. With the advancement of deep learning (DL), nonlinear unmixing methods based on the convolutional autoencoder (CAE) have gained considerable traction in HU. However, these unmixing methods struggle to integrate spectral and spatial information while reducing the loss of material details during the unmixing process. Therefore, we propose a cascaded hybrid CAE nonlinear unmixing network, called CHCANet, which effectively leverages convolutional combinations to deeply explore the spectral-spatial information in hyperspectral data and preserve material details through self-perception. Specifically, each CAE in CHCANet combines 1-D and 2-D convolutions, fully exploiting the flexibility and simplicity of 1-D convolutions for capturing spectral features and the spatial-correlation modeling capability of 2-D convolutions. Moreover, we apply the self-perception mechanism to the nonlinear HU task, which establishes the cycle consistency of the network, strengthens mutual connections between encoders, and effectively preserves high-level semantic information. Following this, the optimized self-perception loss further enhances CHCANet's ability to perceive nonlinear components and strengthens the connections between the decoders directly responsible for image reconstruction. Extensive experiments on synthetic and real datasets demonstrate the effectiveness of CHCANet and its strong competitiveness with state-of-the-art unmixing methods.
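To make the hybrid 1-D/2-D convolution idea concrete, the following is a minimal PyTorch sketch of a single autoencoder stage: a 1-D convolution filters each pixel's spectrum, a 2-D convolution maps the cube to sum-to-one abundance maps, and a 1x1 convolution acts as a linear-mixing decoder. All module names, layer widths, and the single-stage structure are illustrative assumptions, not the paper's actual CHCANet (which cascades several such CAEs and adds a self-perception loss).

```python
import torch
import torch.nn as nn


class HybridCAE(nn.Module):
    """Illustrative hybrid 1-D/2-D convolutional autoencoder stage.

    NOTE: this is a sketch under assumed hyperparameters, not the
    authors' CHCANet architecture.
    """

    def __init__(self, bands: int = 32, endmembers: int = 4):
        super().__init__()
        # 1-D conv along the spectral axis: each pixel's spectrum is
        # treated as a one-channel, length-`bands` signal.
        self.spectral = nn.Conv1d(1, 1, kernel_size=3, padding=1)
        # 2-D conv over the spatial grid: captures spatial correlation
        # and projects bands down to per-pixel endmember scores.
        self.spatial = nn.Conv2d(bands, endmembers, kernel_size=3, padding=1)
        # Softmax over the endmember channel enforces the sum-to-one
        # abundance constraint.
        self.softmax = nn.Softmax(dim=1)
        # 1x1 conv decoder: its weights play the role of endmember
        # signatures in a linear mixing model.
        self.decoder = nn.Conv2d(endmembers, bands, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor):
        b, c, h, w = x.shape  # (batch, bands, height, width)
        # Reshape so every pixel spectrum becomes a 1-D signal.
        s = x.permute(0, 2, 3, 1).reshape(b * h * w, 1, c)
        s = self.spectral(s)
        s = s.reshape(b, h, w, c).permute(0, 3, 1, 2)
        abundances = self.softmax(self.spatial(s))  # (b, endmembers, h, w)
        recon = self.decoder(abundances)            # (b, bands, h, w)
        return recon, abundances


model = HybridCAE(bands=32, endmembers=4)
x = torch.randn(2, 32, 8, 8)
recon, abundances = model(x)
```

A training loop would minimize a reconstruction loss between `recon` and `x` (plus, in the paper's setting, the self-perception loss), and read the estimated abundance maps from `abundances`.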