Abstract

Deep learning has been shown to be effective for classifying coral in benthic imagery and is becoming a standard tool in monitoring programs around the world. Although deep learning accuracy for coral reef classification is well documented for studies in which validation metrics are generated from data within the same survey, little research has examined the transferability and generalisability of trained models to data the model has never seen, for example data spanning multiple capture methods, camera systems, habitat types, water quality conditions, and time periods. In this paper we investigate the use of deep ensembling to measure the reliability of predictions in new or unseen environments. We show that ensemble methods are more stable in their calibration across dataset shifts than other approaches, and that ensembles provide more robust uncertainty quantification in unseen environments, giving greater confidence in the use of pre-trained models in unconstrained settings. These results suggest that ensembles should be the de facto standard for practitioners applying deep learning to benthic image automation and coral classification.
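The deep-ensemble uncertainty idea referred to above can be illustrated with a minimal sketch. The models, classes, and thresholds here are hypothetical (the paper's actual architectures and data are not described in the abstract); the sketch only shows the generic recipe of averaging member softmax outputs and using predictive entropy as an uncertainty score:

```python
import numpy as np

def ensemble_predict(member_probs: np.ndarray):
    """member_probs: (n_members, n_classes) softmax outputs for one image.

    Returns the ensemble-mean class distribution and its predictive
    entropy, a common uncertainty score for deep ensembles.
    """
    mean_probs = member_probs.mean(axis=0)  # average over ensemble members
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    return mean_probs, entropy

# Hypothetical 3-class coral example: members agree -> low uncertainty.
agree = np.array([[0.90, 0.05, 0.05],
                  [0.85, 0.10, 0.05],
                  [0.92, 0.04, 0.04]])

# Members disagree (as might happen under dataset shift) -> high uncertainty.
disagree = np.array([[0.90, 0.05, 0.05],
                     [0.10, 0.80, 0.10],
                     [0.20, 0.10, 0.70]])

_, h_agree = ensemble_predict(agree)
_, h_disagree = ensemble_predict(disagree)
assert h_disagree > h_agree  # disagreement raises predictive entropy
```

In a deployment setting, images whose predictive entropy exceeds a chosen threshold could be flagged for expert review rather than auto-labelled, which is one way the reliability signal described in the abstract can be acted on.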
