Abstract
Recent advances in machine learning and computer vision have enabled increased automation in benthic habitat mapping through airborne and satellite remote sensing. Here, we applied deep learning and neural network architectures in NASA NeMO-Net, a novel neural multimodal observation and training network for global habitat mapping of shallow benthic tropical marine systems. These ecosystems, particularly coral reefs, are undergoing rapid changes as a result of increasing ocean temperatures, acidification, and pollution, among other stressors. Remote sensing from air and space has been the primary method by which changes are assessed within these important, often remote, ecosystems at a global scale. However, such global datasets often suffer from large spectral variances due to the time of observation, atmospheric effects, water column properties, and heterogeneous instruments and calibrations. To address these challenges, we developed an object-based fully convolutional network (FCN) to improve upon the spatial-spectral classification problem inherent in multimodal datasets. We showed that by training on augmented data in conjunction with classical methods, such as K-nearest neighbors, we were able to achieve better overall classification and segmentation results. This suggests that FCNs effectively identify the relevant spectral and spatial spaces within an image, whereas pixel-based classical methods excel at classification within those identified spaces. Our spectrally invariant results, based on minimally preprocessed WorldView-2 and Planet satellite imagery, show a total accuracy of approximately 85% and 80%, respectively, over nine classes when trained and tested on a chain of Fijian islands imaged under highly variable day-to-day spectral conditions.
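As a rough illustration of the hybrid strategy described in the abstract, the Python sketch below (with made-up array shapes, class counts, and a hypothetical confidence threshold; it is not the NeMO-Net implementation) lets a coarse FCN label map seed a K-nearest-neighbors classifier on high-confidence pixels, which then re-labels the remaining pixels from their spectra.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical inputs: an 8-band image patch, a coarse FCN label map over
# nine benthic classes, and a per-pixel FCN confidence score. Random data
# stands in for real imagery here.
H, W, BANDS, N_CLASSES = 64, 64, 8, 9
image = rng.random((H, W, BANDS)).astype(np.float32)
fcn_labels = rng.integers(0, N_CLASSES, size=(H, W))
confidence = rng.random((H, W))

pixels = image.reshape(-1, BANDS)
labels = fcn_labels.reshape(-1)
conf = confidence.reshape(-1)
confident = conf > 0.8  # assumed threshold for "trusted" FCN pixels

# Train a pixel-based classifier on the pixels the FCN is most sure about...
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(pixels[confident], labels[confident])

# ...then let it refine the low-confidence pixels within the FCN's regions.
refined = labels.copy()
refined[~confident] = knn.predict(pixels[~confident])
refined_map = refined.reshape(H, W)

The division of labor mirrors the claim in the abstract: the FCN supplies the spatial-spectral context, while the pixel-based method handles fine-grained classification within that context.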
Highlights
Machine learning in the field of computer vision has recently led to dramatic progress in areas of image classification, segmentation, and feature extraction.
In our experience, the Khaled bin Sultan Living Oceans Foundation (KSLOF) labels often classified large swathes as “back-reef pavement” even where there are clear delineations between pavement and sediment areas, possibly because a mixture of live and dead coral colonies makes up such regions.
We report for this study the common metrics of mean accuracy, mean precision, mean recall, and frequency-weighted intersection over union (IoU, also known as the Jaccard index); see the sketch below.
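For concreteness, the sketch below computes these metrics from a confusion matrix using the standard semantic-segmentation definitions; it is not the paper's own evaluation code, and note that under these definitions per-class (mean) accuracy coincides with mean recall.

import numpy as np

def segmentation_metrics(conf_mat: np.ndarray) -> dict:
    # conf_mat[i, j] counts pixels of true class i predicted as class j.
    conf_mat = conf_mat.astype(np.float64)
    tp = np.diag(conf_mat)              # correctly labeled pixels per class
    support = conf_mat.sum(axis=1)      # pixels of each true class
    predicted = conf_mat.sum(axis=0)    # pixels assigned to each class
    union = support + predicted - tp    # TP + FP + FN per class

    recall = tp / np.maximum(support, 1)
    precision = tp / np.maximum(predicted, 1)
    iou = tp / np.maximum(union, 1)
    freq = support / conf_mat.sum()     # class frequencies for weighting

    return {
        "mean_accuracy": float(recall.mean()),   # per-class accuracy, averaged
        "mean_precision": float(precision.mean()),
        "mean_recall": float(recall.mean()),
        "frequency_weighted_iou": float((freq * iou).sum()),
    }

# Example with a made-up 3-class confusion matrix:
C = np.array([[50, 2, 3],
              [4, 40, 1],
              [2, 2, 46]])
print(segmentation_metrics(C))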
Summary
Machine learning in the field of computer vision has recently led to dramatic progress in areas of image classification, segmentation, and feature extraction. In the field of remote sensing, data-rich sources such as daily satellite imagery over the entirety of Earth’s surface have steadily become more prevalent and available. The confluence of these factors has led to interest in incorporating deep learning and sensor fusion methods with traditional remote sensing methodology to automate the classification and interpretation of Earth Observation Systems (EOS) data. Remote sensing is the primary method by which global-scale observations of the Earth system are made, barring direct physical contact. Previous examples include the Hyperspectral Imager for the Coastal Ocean (HICO) [15], IKONOS [16], Sentinel-2 [17], WorldView-2 [18], and the Landsat series [19], while newer instruments include HISUI [20] and DESIS [21] aboard the International Space Station (ISS).