Recently, autoencoders (AEs) have demonstrated remarkable performance in hyperspectral anomaly detection owing to their powerful capability in handling high-dimensional data. However, they often overlook the inherent global distribution characteristics and long-range dependencies in hyperspectral images (HSI). This oversight makes it difficult to accurately characterize the boundaries between different backgrounds and anomalies in complex HSI, thereby degrading detection accuracy. To address this issue, a robust multi-stage progressive autoencoder for hyperspectral anomaly detection (RMSAD) is proposed. Initially, a progressive multi-stage learning framework based on convolutional autoencoders is employed; it incrementally reveals and integrates deep contextual features and their long-range dependencies in the HSI in order to accurately characterize the background and anomalies. Subsequently, an innovative multi-scale fusion strategy is introduced at the intersection of each stage, reinforcing the learning and representation of background and global spatial details across multiple stages. Finally, abnormal spatial information is extracted collectively across stages, which reduces the tendency of the autoencoder to reconstruct anomalies and ensures the faithful restoration of global textural details in the HSI. Experimental results on six HSI datasets demonstrate that the proposed RMSAD outperforms other state-of-the-art methods.
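The abstract does not give architectural details, but the general idea of a multi-stage convolutional autoencoder with cross-stage fusion and a reconstruction-error anomaly score can be illustrated as follows. This is a minimal PyTorch sketch under assumed design choices (stage count, channel sizes, fusion by concatenation with a 1x1 convolution, and mean-squared reconstruction error as the anomaly map); it is not the authors' exact RMSAD model.

```python
import torch
import torch.nn as nn


class ConvAEStage(nn.Module):
    """One encoder-decoder stage operating on an HSI cube of shape (B, bands, H, W)."""

    def __init__(self, in_channels: int, hidden_channels: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, hidden_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Conv2d(hidden_channels, in_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Return the reconstruction of the stage input.
        return self.decoder(self.encoder(x))


class MultiStageAE(nn.Module):
    """Progressive stages: each later stage sees the original input fused with the
    previous stage's reconstruction (fusion via concatenation + 1x1 conv, an assumption)."""

    def __init__(self, bands: int, hidden_channels: int = 64, num_stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList(
            [ConvAEStage(bands, hidden_channels) for _ in range(num_stages)]
        )
        # 1x1 convolutions that fuse the input cube with the previous reconstruction.
        self.fusers = nn.ModuleList(
            [nn.Conv2d(2 * bands, bands, kernel_size=1) for _ in range(num_stages - 1)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        recon = self.stages[0](x)
        for stage, fuse in zip(self.stages[1:], self.fusers):
            fused = fuse(torch.cat([x, recon], dim=1))
            recon = stage(fused)
        return recon


if __name__ == "__main__":
    # Synthetic cube: (batch, bands, height, width).
    hsi = torch.randn(1, 100, 64, 64)
    model = MultiStageAE(bands=100)
    recon = model(hsi)
    # Per-pixel reconstruction error as the anomaly map, a common convention
    # for reconstruction-based detectors; the paper's scoring rule may differ.
    anomaly_map = (hsi - recon).pow(2).mean(dim=1)  # shape (batch, H, W)
```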