Abstract

Hyperspectral image (HSI) classification with convolutional neural networks (CNNs) has long been an active topic in remote sensing, owing to the high-level feature extraction that CNNs offer, which encodes features efficiently at several stages. However, CNNs have a drawback: exceptional performance requires a deeper and wider architecture together with an enormous amount of training data, which is often impractical or infeasible. Furthermore, reliance on forward connections alone leads to inefficient information flow, which further limits classification. To mitigate these issues, we propose a self-looping convolutional network for more efficient HSI classification. In our method, each layer in a self-looping block has both forward and backward connections, so that each layer serves as both the input and the output of every other layer, thus forming a loop. These loopy connections allow maximum information flow within the network, yielding high-level feature extraction. The self-looping connections also let us control the number of network parameters efficiently, which in turn permits a wider architecture with a multiscale setting, giving abstract representations at different spatial levels. We test our method on four benchmark hyperspectral datasets: the two Houston hyperspectral datasets (DFC 2013 and DFC 2018), the Salinas Valley dataset, and the combined Pavia University and Pavia Centre datasets, where our method achieves state-of-the-art performance (highest percentage kappa of 87.28%, 71.08%, 99.24%, and 68.44%, respectively, for the four datasets).
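To make the self-looping idea concrete, the following is a minimal numpy sketch of how a block in which every layer feeds every other layer might be unrolled. This is an illustrative assumption, not the authors' exact formulation: the function name `self_looping_block`, the fixed number of loop iterations, the summation-based aggregation, and the use of dense matrices in place of convolutions are all hypothetical simplifications.

```python
import numpy as np

def relu(x):
    # simple nonlinearity used between layer updates
    return np.maximum(x, 0.0)

def self_looping_block(x, weights, loop_steps=3):
    """Hypothetical sketch of a self-looping block.

    Every layer receives the block input plus the outputs of all
    other layers (forward and backward connections), and the loop
    is unrolled for a fixed number of steps.

    x       : (d,) input feature vector
    weights : list of (d, d) matrices, one per layer in the block
    """
    n = len(weights)
    outs = [np.zeros_like(x) for _ in range(n)]  # initial layer states
    for _ in range(loop_steps):
        new_outs = []
        for i, W in enumerate(weights):
            # layer i sees the block input plus every other layer's output
            agg = x + sum(outs[j] for j in range(n) if j != i)
            new_outs.append(relu(W @ agg))
        outs = new_outs
    # block output: aggregate of all layer outputs
    return sum(outs)

rng = np.random.default_rng(0)
d, n_layers = 8, 3
weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_layers)]
y = self_looping_block(rng.normal(size=d), weights)
```

In an actual CNN the dense products would be convolutions over spatial patches, and the multiscale setting described in the abstract would apply such blocks at several spatial resolutions; the unrolled loop above only illustrates the bidirectional information flow.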
