Abstract

Deep neural networks (DNNs) have revolutionized the way remotely sensed hyperspectral image (HSI) data are managed and processed. For instance, residual networks (ResNets) have achieved high classification accuracy by applying sequential (layer-by-layer) transformations to the input HSI data, obtaining highly discriminative data representations. However, these models are quite complex: the large number of parameters they must learn imposes significant memory requirements and also leads to potential overfitting issues. In this work, we specifically address this problem by re-interpreting a DNN (the ResNet) as a continuous transformation, instead of the traditional (discrete) step-by-step approach. To achieve this, we combine ordinary differential equations (ODEs) with DNN architectures for the first time in the HSI data classification literature. This allows us to perform remotely sensed HSI data classification in a parameter-efficient way. Our experimental results, conducted using two well-known HSI data sets, indicate that the inclusion of ODEs in the architecture of DNNs offers significant advantages when processing and classifying this kind of high-dimensional data, achieving better performance even with less training data.
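The core re-interpretation can be illustrated with a minimal sketch: a ResNet block computes x_{k+1} = x_k + f(x_k), which is exactly one forward-Euler step of the ODE dx/dt = f(x). The residual function `f` below is a hypothetical fixed linear map standing in for a learned layer; in the continuous view, depth becomes an integration interval, so accuracy can be refined by taking more solver steps without adding any parameters.

```python
import numpy as np

# Hypothetical residual function f(x): a fixed linear map standing in
# for a learned neural layer shared across the whole depth interval.
W = np.array([[0.0, -0.1],
              [0.1,  0.0]])

def f(x):
    return W @ x

def resnet_forward(x, n_layers):
    # Discrete ResNet: x_{k+1} = x_k + f(x_k), one update per layer.
    for _ in range(n_layers):
        x = x + f(x)
    return x

def ode_forward(x, t1, n_steps):
    # Continuous view: solve dx/dt = f(x) on [0, t1] with Euler steps.
    h = t1 / n_steps
    for _ in range(n_steps):
        x = x + h * f(x)
    return x

x0 = np.array([1.0, 0.0])
# With step size h = 1 (n_steps == t1), the Euler solve coincides with
# the discrete ResNet; a finer step refines the same trajectory for free.
print(np.allclose(resnet_forward(x0, 10), ode_forward(x0, 10.0, 10)))  # True
```

This is only a schematic of the ResNet/ODE correspondence, not the authors' classification architecture, which applies the idea to hyperspectral data with a learned residual function.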
