Abstract
NASA NeMO-Net, The Neural Multimodal Observation and Training Network for global coral reef assessment, is a convolutional neural network (CNN) that generates benthic habitat maps of coral reefs and other shallow marine ecosystems. To segment and classify imagery accurately, CNNs require curated training datasets of considerable volume and accuracy. Here, we present a citizen science approach to create these training datasets through a novel 3D classification game for mobile and desktop devices. Leveraging citizen science, the NeMO-Net video game generates high-resolution 3D benthic habitat labels at the subcentimeter to meter scales. The video game trains users to accurately identify benthic categories and semantically segment 3D scenes captured using NASA airborne fluid lensing, the first remote sensing technology capable of mitigating ocean wave distortions, as well as in situ 3D photogrammetry and 2D satellite remote sensing. An active learning framework is used in the game to allow users to rate and edit other users’ classifications, dynamically improving segmentation accuracy. Refined and aggregated data labels from the game are used to train NeMO-Net’s supercomputer-based CNN to autonomously map shallow marine systems and augment satellite habitat mapping accuracy in these regions. We share the NeMO-Net game approach to user training and retention, outline the 3D labeling technique developed to accurately label complex coral reef imagery, and present preliminary results from over 70,000 user classifications. To overcome the inherent variability of citizen science, we analyze criteria and metrics for evaluating and filtering user data. Finally, we examine how future citizen science and machine learning approaches might benefit from label training in 3D space using an active learning framework. Within 7 months of launch, NeMO-Net has reached over 300 million people globally and directly engaged communities in coral reef mapping and conservation through ongoing scientific field campaigns, uninhibited by geography, language, or physical ability. As more user data are fed into NeMO-Net’s CNN, it will produce the first shallow-marine habitat mapping products trained on 3D sub-centimeter-scale label data, merged with meter-scale satellite data, which could be applied globally as datasets become available.
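The abstract does not specify how peer ratings and edits are combined into consensus labels; the sketch below is one minimal, assumed scheme for the kind of evaluation and filtering described above, namely a weighted per-pixel majority vote with an agreement threshold. The function name, the per-player reliability weights, and the 0.6 cutoff are illustrative assumptions, not NeMO-Net's actual pipeline.

import numpy as np

def aggregate_player_labels(votes, reliability, num_classes, min_agreement=0.6):
    """Weighted per-pixel majority vote over citizen-science labels.

    votes       : (num_players, H, W) integer benthic class IDs from each player
    reliability : (num_players,) weights, e.g. derived from peer ratings (assumed)
    num_classes : number of benthic categories in the game

    Returns a consensus label map and a boolean mask of pixels whose
    agreement meets the (assumed) min_agreement threshold.
    """
    num_players, h, w = votes.shape
    rows, cols = np.indices((h, w))
    tally = np.zeros((num_classes, h, w))
    for p in range(num_players):
        # Each player's vote adds that player's reliability to the chosen class.
        tally[votes[p], rows, cols] += reliability[p]
    consensus = tally.argmax(axis=0)
    agreement = tally.max(axis=0) / tally.sum(axis=0)
    return consensus, agreement >= min_agreement

# Example: three players labeling a 2x2 tile drawn from three benthic classes.
votes = np.array([[[0, 1], [2, 2]],
                  [[0, 1], [1, 2]],
                  [[0, 0], [1, 2]]])
consensus, keep = aggregate_player_labels(votes, np.array([1.0, 0.8, 0.5]), num_classes=3)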
Highlights
Marine ecosystems are in the midst of a conservation crisis
We examine how future citizen science and machine learning approaches might benefit from label training in 3D space using an active learning framework
NeMO-Net leverages multispectral 2D and 3D datasets from in situ photogrammetry captured by divers or snorkelers at the cm-scale (3D RGB images), airborne fluid lensing at the cm-scale (2D and 3D RGB images), and commercial and governmental satellite sources at the m-decameter scale (2D multispectral images with eight visible bands); a sketch of how such co-registered inputs might be organized follows this list
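As a concrete illustration of the highlight above, the following sketch shows one way such co-registered, multi-resolution inputs might be bundled for training. The class name, field names, array shapes, and ground-sample distances (GSD) are assumptions for illustration only, not the actual NeMO-Net data schema.

from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class BenthicTrainingScene:
    """One co-registered training scene spanning the three modalities listed
    above; layout is an illustrative assumption, not the NeMO-Net schema."""
    photogrammetry_rgb: np.ndarray       # (H, W, 3) in situ 3D-photogrammetry ortho-image, ~cm GSD
    photogrammetry_height: np.ndarray    # (H, W) per-pixel height sampled from the 3D mesh
    fluidcam_rgb: Optional[np.ndarray]   # (H, W, 3) airborne fluid-lensing mosaic, ~cm GSD
    satellite_multispectral: np.ndarray  # (h, w, 8) visible-band satellite image, m-decameter GSD
    labels: Optional[np.ndarray] = None  # (H, W) citizen-science benthic class IDs, where available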
Summary
Marine ecosystems are in the midst of a conservation crisis. Coral reefs, in particular, are facing degradation from climate change, disease, and other stressors faster than they are able to regenerate (Hughes, 1994; Bellwood et al., 2004). In an effort to better understand these ecosystems and coordinate an effective response to this crisis, projects such as the Khaled bin Sultan Living Oceans Foundation, under the auspices of its Global Reef Expedition (Purkis et al., 2019), the Allen Coral Atlas (Allen Coral Atlas, 2020; Lyons et al., 2020), the Millennium Coral Reef Mapping Project (Andréfouët et al., 2004), and NASA’s COral Reef Airborne Laboratory mission (Hochberg and Gierach, 2020) have imaged large portions of the world’s coral reefs using satellite/airborne sensors and in situ photogrammetry. Instruments such as NASA’s FluidCam, fluid lensing technology, and MiDAR provide a means to eliminate refractive ocean wave distortion (Chirayath and Earle, 2016; Purkis, 2018; Chirayath, 2019; Tavares, 2020), enabling airborne campaigns to generate sub-centimeter, 3D photogrammetry of shallow marine ecosystems over regional scales (Silver, 2019). Using CNNs for marine mapping has shown encouraging results (King et al., 2018; Akbari et al., 2020), but systems that exhibit a high degree of taxonomic and geomorphological diversity, such as coral reefs, still require training sets of considerable size for a CNN to segment and classify imagery accurately (Jansen and Zhang, 2007; Chirayath and Instrella, 2019).
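To make the semantic segmentation task concrete, the sketch below sets up a deliberately small encoder-decoder in Keras that predicts a benthic class for every pixel of an image tile. The architecture, class count, and tile size are illustrative assumptions and do not represent NeMO-Net's published CNN or training configuration.

import tensorflow as tf

NUM_CLASSES = 10  # assumed placeholder, not NeMO-Net's actual benthic class count

def tiny_segmentation_cnn(input_shape=(256, 256, 3)):
    """A deliberately small encoder-decoder for per-pixel benthic classification.
    Illustrative sketch only -- not the NeMO-Net architecture."""
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)   # downsample 256 -> 128
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D()(x)   # downsample 128 -> 64
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.UpSampling2D()(x)   # upsample 64 -> 128
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.UpSampling2D()(x)   # upsample 128 -> 256
    outputs = tf.keras.layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = tiny_segmentation_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(image_tiles, label_tiles, ...)  # label_tiles: (N, 256, 256) integer class IDs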