Abstract

Hyperspectral images (HSIs) provide abundant spectral and spatial information, playing an irreplaceable role in land-cover classification. Recently, an increasing number of HSI classification approaches based on deep learning (DL) have been proposed and demonstrate promising performance. However, previous studies suffer from two major drawbacks: 1) the architecture of most DL models is designed manually, which requires specialized knowledge and is tedious. Moreover, in HSI classification, datasets captured by different sensors have different physical properties, so a different model needs to be designed for each dataset, further increasing the architecture-design workload; and 2) the mainstream framework is a patch-to-pixel framework, in which the overlapping regions of the patches of adjacent pixels are computed repeatedly, increasing both computational cost and runtime. In addition, the classification accuracy is sensitive to the patch size, which is typically set manually after extensive trial experiments. To overcome these issues, we first propose a 3-D asymmetric neural architecture search algorithm and leverage it to automatically search for efficient architectures for HSI classification. By analyzing the characteristics of HSIs, we specifically build a 3-D asymmetric decomposition search space, in which spectral and spatial information are processed with different decomposition convolutions. Furthermore, we propose a new fast classification framework, i.e., a pixel-to-pixel classification framework, which has no repetitive operations and reduces the overall cost. Experiments on three public HSI datasets captured by different sensors demonstrate that the networks designed by our 3-D asymmetric neural architecture search (3-D-ANAS) achieve competitive performance compared with several state-of-the-art methods, while having a much faster inference speed. Code is available at: https://github.com/hkzhang91/3D-ANAS.
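To illustrate the idea of asymmetric decomposition described above, the sketch below factorizes a standard 3-D convolution into a spectral convolution (k x 1 x 1) followed by a spatial convolution (1 x k x k), so spectral and spatial information are handled by different kernels. This is a minimal PyTorch sketch, not the authors' implementation; the class name, kernel sizes, and the specific ordering of the two branches are assumptions made for illustration.

```python
# Minimal sketch (assumed, not the authors' code) of a 3-D asymmetric
# decomposition convolution block for HSI cubes shaped (batch, channels,
# bands, height, width).
import torch
import torch.nn as nn


class Asym3DConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Spectral branch: convolves only along the band dimension.
        self.spectral = nn.Conv3d(in_ch, out_ch, kernel_size=(k, 1, 1),
                                  padding=(k // 2, 0, 0), bias=False)
        # Spatial branch: convolves only in the spatial plane.
        self.spatial = nn.Conv3d(out_ch, out_ch, kernel_size=(1, k, k),
                                 padding=(0, k // 2, k // 2), bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.spatial(self.spectral(x))))


# Usage: a full-tile forward pass with no per-pixel patch extraction,
# in the spirit of a pixel-to-pixel framework (shapes are hypothetical).
x = torch.randn(1, 1, 103, 64, 64)   # e.g., 103 spectral bands, 64x64 tile
y = Asym3DConvBlock(1, 16)(x)
print(y.shape)                        # torch.Size([1, 16, 103, 64, 64])
```

Because the output preserves the spatial dimensions of the input tile, every pixel receives a prediction in a single forward pass, avoiding the repeated computation over overlapping patches noted in the abstract.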

