Abstract

Convolutional models have provided outstanding performance in the analysis of hyperspectral images (HSIs). These architectures are carefully designed to extract intricate information from non-linear features for classification tasks. Despite these results, model architectures are manually engineered and then further optimized for generalized feature extraction. In general, deep architectures are time-consuming to deploy in complex scenarios because they require extensive fine-tuning. Neural architecture search (NAS) has emerged as a suitable approach to tackle this shortcoming. In parallel, modern attention-based methods have boosted the recognition of sophisticated features. The search for optimal neural architectures combined with attention procedures motivates this work. This paper develops a new method to automatically design and optimize convolutional neural networks (CNNs) for HSI classification using channel-based attention mechanisms. Specifically, one-dimensional (1D) and spectral-spatial (3D) classifiers are considered to handle the large amount of information contained in HSIs from different perspectives. Furthermore, the proposed AAtt-CNN method reduces the large computational overhead associated with architecture search. It is compared with current state-of-the-art (SOTA) classifiers. Our experiments, conducted on a wide range of hyperspectral scenes, demonstrate that AAtt-CNN succeeds in finding optimal architectures for classification, leading to SOTA results.
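
To make the "channel-based attention" idea concrete, the following is a minimal illustrative sketch of a squeeze-and-excitation style channel attention block applied to 3D (spectral-spatial) feature maps. The module name `ChannelAttention3D`, the `reduction` ratio, and the tensor shapes are assumptions for illustration only; they are not the authors' exact AAtt-CNN building block.

```python
# Hedged sketch: a generic channel attention block for 3D HSI features,
# NOT the paper's exact AAtt-CNN module (names and shapes are assumed).
import torch
import torch.nn as nn


class ChannelAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Squeeze: global average pooling over the spectral-spatial volume.
        self.pool = nn.AdaptiveAvgPool3d(1)
        # Excitation: bottleneck MLP producing per-channel attention weights.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.pool(x).view(b, c)            # (B, C) channel descriptors
        w = self.fc(w).view(b, c, 1, 1, 1)     # (B, C, 1, 1, 1) weights
        return x * w                           # reweight feature channels


# Usage example: features from a 3D convolution over an HSI patch,
# shaped (batch, channels, spectral bands, height, width).
feats = torch.randn(2, 32, 16, 9, 9)
attended = ChannelAttention3D(32)(feats)
print(attended.shape)  # torch.Size([2, 32, 16, 9, 9])
```

In a NAS setting, a block of this kind would typically be one candidate operation in the search space, so the search can decide where (and whether) channel attention is inserted in the 1D or 3D classifier.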
