Abstract

In view of the complexity and diversity of hyperspectral images (HSIs), classification has long been a major challenge in remote sensing image processing. Hyperspectral image classification (HSIC) methods based on neural architecture search (NAS) are an attractive research frontier: they not only automatically search for the neural network architecture best suited to the characteristics of HSI data, but also avoid the limitations of manually designed networks when facing new classification tasks. However, existing NAS-based HSIC methods have two limitations: (1) the search space lacks efficient convolution operators that can fully extract discriminative spatial–spectral features, and (2) NAS based on traditional differentiable architecture search (DARTS) suffers from performance collapse caused by unfair competition. To overcome these limitations, we propose a neural architecture search method with receptive field spatial–spectral attention (RFSS-NAS), designed to automatically search for the optimal architecture for HSIC. Considering the model's core need to extract more discriminative spatial–spectral features, we design a novel and efficient attention search space whose core component is the receptive field spatial–spectral attention convolution operator; this operator focuses precisely on the critical information in the image and thus greatly enhances the quality of feature extraction. To resolve the unfair competition issue in the traditional DARTS strategy, we introduce the Noisy-DARTS strategy, which ensures the fairness and efficiency of the search process and effectively avoids the risk of performance collapse. In addition, to further improve the robustness of the model and its ability to recognize difficult-to-classify samples, we propose a fusion loss function that combines the advantages of the label smoothing loss and the polynomial expansion perspective loss function: it smooths the label distribution to reduce the risk of overfitting and effectively handles difficult-to-classify samples, thereby improving overall classification accuracy. Experiments on three public datasets fully validate the superior performance of RFSS-NAS.
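
As a rough illustration of how such a fusion loss might be assembled, the sketch below combines label-smoothed cross-entropy with a Poly-1-style correction term derived from the polynomial expansion perspective. The function name, the smoothing value, and the weighting coefficient `poly_eps` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def fusion_loss(logits, targets, smoothing=0.1, poly_eps=1.0):
    """Hedged sketch of a fusion loss: label-smoothed cross-entropy plus a
    Poly-1 term. Hyperparameter values here are assumptions, not the paper's."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)

    # Label-smoothed target distribution: (1 - smoothing) on the true class,
    # the remaining mass spread uniformly over the other classes.
    smooth_targets = torch.full_like(log_probs, smoothing / (num_classes - 1))
    smooth_targets.scatter_(-1, targets.unsqueeze(-1), 1.0 - smoothing)
    ce_smooth = -(smooth_targets * log_probs).sum(dim=-1)

    # Poly-1 term: eps * (1 - p_t), where p_t is the predicted probability of
    # the true class; it up-weights hard, low-confidence samples.
    pt = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1).exp()
    poly1 = poly_eps * (1.0 - pt)

    return (ce_smooth + poly1).mean()

# Example usage with random logits for a batch of 8 samples and 16 classes.
logits = torch.randn(8, 16)
targets = torch.randint(0, 16, (8,))
loss = fusion_loss(logits, targets)
```

The intent mirrors the abstract's argument: the smoothing term regularizes the label distribution against overfitting, while the polynomial term penalizes low-confidence predictions on difficult-to-classify samples.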
