Abstract

This paper presents MSSFF (multistage spectral–spatial feature fusion), a novel framework for semantic segmentation of hyperspectral imagery (HSI). The framework aims to simplify the modeling of spectral relationships in HSI sequences and to unify the architecture for HSI semantic segmentation. It incorporates a spectral–spatial feature fusion module and a multi-attention mechanism to extract hyperspectral features efficiently. MSSFF reevaluates the relative impact of spectral and spatial features on segmentation models and applies the spectral–spatial fusion module (SSFM) in the encoder to extract and enhance these features. In addition, an efficient Transformer (ET) is introduced in the deep-feature skip connections to capture long-range dependencies and extract global spectral–spatial information from the entire feature map, highlighting the potential of Transformers for modeling spectral–spatial feature maps in hyperspectral remote sensing. A spatial attention mechanism is adopted in the shallow skip connections to extract local features. The framework demonstrates promising capabilities in hyperspectral remote sensing applications, and the experiments provide insights into choosing the model depth and the order of feature fusion, contributing to the advancement of hyperspectral semantic segmentation research.
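To make the described layout concrete, the following is a minimal PyTorch sketch of an encoder–decoder with SSFM-style blocks in the encoder, a Transformer layer on the deep skip connection, and spatial attention on the shallow one. All module names, channel sizes, and the number of stages are illustrative assumptions; the abstract does not specify implementation details.

```python
# Hedged sketch of the MSSFF layout described in the abstract.
# Channel counts, stage count, and internal designs of SSFM and the
# attention gate are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class SSFM(nn.Module):
    """Assumed spectral-spatial fusion: a 1x1 conv mixes spectral bands,
    a 3x3 conv captures spatial context, and the two paths are summed."""
    def __init__(self, channels):
        super().__init__()
        self.spectral = nn.Conv2d(channels, channels, kernel_size=1)
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.spectral(x) + self.spatial(x))


class SpatialAttention(nn.Module):
    """Simple spatial attention gate for the shallow skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mask(x)


class MSSFFSketch(nn.Module):
    """Two-stage encoder-decoder: SSFM blocks in the encoder, a Transformer
    encoder layer on the deep skip, spatial attention on the shallow skip."""
    def __init__(self, bands, num_classes, c1=64, c2=128):
        super().__init__()
        self.stem = nn.Conv2d(bands, c1, kernel_size=3, padding=1)
        self.enc1 = SSFM(c1)
        self.down = nn.Conv2d(c1, c2, kernel_size=3, stride=2, padding=1)
        self.enc2 = SSFM(c2)
        self.deep_skip = nn.TransformerEncoderLayer(d_model=c2, nhead=4,
                                                    batch_first=True)
        self.shallow_skip = SpatialAttention(c1)
        self.up = nn.ConvTranspose2d(c2, c1, kernel_size=2, stride=2)
        self.head = nn.Conv2d(c1, num_classes, kernel_size=1)

    def forward(self, x):
        s1 = self.enc1(self.stem(x))            # shallow spectral-spatial features
        s2 = self.enc2(self.down(s1))           # deep spectral-spatial features
        b, c, h, w = s2.shape
        tokens = s2.flatten(2).transpose(1, 2)  # (B, HW, C) tokens for attention
        s2 = self.deep_skip(tokens).transpose(1, 2).reshape(b, c, h, w)
        d1 = self.up(s2) + self.shallow_skip(s1)
        return self.head(d1)                    # per-pixel class logits


if __name__ == "__main__":
    # Hypothetical input: one 200-band HSI patch of 64x64 pixels, 16 classes.
    model = MSSFFSketch(bands=200, num_classes=16)
    logits = model(torch.randn(1, 200, 64, 64))
    print(logits.shape)  # torch.Size([1, 16, 64, 64])
```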
