Abstract

Owing to their powerful automatic representation capabilities, deep learning (DL) techniques have achieved significant breakthroughs in hyperspectral unmixing (HU). Among DL approaches, autoencoders (AEs) have become a widely used and promising network architecture. However, AE-based methods rely heavily on manual network design and may fit specific datasets poorly. To unmix hyperspectral images more intelligently, we propose an automatic neural architecture search model for HU, AutoNAS for short, which determines the optimal network architecture by considering channel configurations and convolution kernels simultaneously. In AutoNAS, a self-supervised training mechanism based on hyperspectral images is first designed to generate training samples for the supernet. An affine parameter sharing strategy is then adopted, applying different affine transformations to the supernet weights during training, which enables the optimal channel configuration to be found. On the basis of the obtained channel configuration, an evolutionary algorithm with additional computational constraints is introduced to achieve flexible convolution kernel search by evaluating the unmixing results of different architectures in the supernet. Extensive experiments on four hyperspectral datasets demonstrate the effectiveness and superiority of the proposed AutoNAS in comparison with several state-of-the-art unmixing algorithms.
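To illustrate the final search stage described above, the sketch below implements a generic evolutionary loop over per-layer kernel sizes under a computational budget. It is a minimal sketch, not the paper's implementation: the kernel choices, the FLOPs cost model, and the fitness stub unmixing_fitness are all illustrative assumptions standing in for evaluation with inherited supernet weights.

import random

# Hypothetical search space: one kernel size per layer (values illustrative).
KERNEL_CHOICES = [1, 3, 5, 7]
NUM_LAYERS = 8
FLOPS_BUDGET = 1.0e9           # assumed computational constraint
POP_SIZE, GENERATIONS, TOP_K = 20, 10, 5
MUTATION_PROB = 0.2

def estimate_flops(arch):
    """Stand-in cost model: per-layer FLOPs grow with the squared kernel size.
    A real implementation would measure the selected supernet sub-path."""
    return sum(4.0e6 * k * k for k in arch)

def unmixing_fitness(arch):
    """Placeholder for the unmixing error (e.g. abundance RMSE) of a candidate
    evaluated with shared supernet weights; random here for illustration."""
    return random.random()

def random_arch():
    # Rejection-sample until the candidate satisfies the budget.
    while True:
        arch = tuple(random.choice(KERNEL_CHOICES) for _ in range(NUM_LAYERS))
        if estimate_flops(arch) <= FLOPS_BUDGET:
            return arch

def mutate(arch):
    return tuple(random.choice(KERNEL_CHOICES)
                 if random.random() < MUTATION_PROB else k for k in arch)

def crossover(a, b):
    # Uniform crossover: pick each layer's kernel from either parent.
    return tuple(random.choice(pair) for pair in zip(a, b))

population = [random_arch() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    scored = sorted(population, key=unmixing_fitness)   # lower error is better
    parents = scored[:TOP_K]
    children = []
    while len(children) < POP_SIZE - TOP_K:
        child = mutate(crossover(*random.sample(parents, 2)))
        if estimate_flops(child) <= FLOPS_BUDGET:       # enforce the constraint
            children.append(child)
    population = parents + children

print("best kernel configuration:", min(population, key=unmixing_fitness))

In a full system, the fitness call would run the candidate sub-network with weights inherited from the trained supernet, so no candidate requires retraining; the budget check above simply rejects over-cost offspring, which is one common way to impose the additional computational constraint.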
