Abstract

Digital imaging has become one of the most important techniques in environmental monitoring and exploration. In the marine environment, mobile platforms such as autonomous underwater vehicles (AUVs) are now equipped with high-resolution cameras to capture huge collections of images from the seabed. However, the timely evaluation of all these images presents a bottleneck, as tens of thousands or more images can be collected during a single dive. This makes computational support for marine image analysis essential. Computer-aided analysis of environmental images (and marine images in particular) with machine learning algorithms is promising, but challenging and different from other imaging domains because training data and class labels cannot be collected as efficiently and comprehensively as in other areas. In this paper, we present Machine learning Assisted Image Annotation (MAIA), a new image annotation method for environmental monitoring and exploration that overcomes the obstacle of missing training data. The method uses a combination of autoencoder networks and the Mask Region-based Convolutional Neural Network (Mask R-CNN), which allows human observers to annotate large image collections much faster than before. We evaluated the method with three marine image datasets featuring different types of background, imaging equipment and object classes. Using MAIA, we were able to annotate objects of interest with an average recall of 84.1%, more than twice as fast as with "traditional" annotation methods, which are purely based on software-supported direct visual inspection and manual annotation. The speed gain increases proportionally with the size of a dataset. The MAIA approach represents a substantial improvement on the path to greater efficiency in the annotation of large benthic image collections.
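MAIA first proposes candidate regions with unsupervised novelty detection (autoencoder networks) and then refines them with Mask R-CNN. As a rough illustration of the second stage only, the sketch below runs an off-the-shelf Mask R-CNN from torchvision to propose candidate regions in a single image; the COCO-pretrained weights, the score threshold and the image path are our assumptions, not the trained model or parameters used in the paper.

```python
# Minimal sketch of Mask R-CNN region proposal (assumption: torchvision's
# COCO-pretrained model as a stand-in for the network trained in MAIA).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("seabed_tile.jpg").convert("RGB")  # hypothetical file
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep detections above a confidence threshold as "interesting regions"
# for subsequent manual review by a human observer.
keep = prediction["scores"] > 0.5   # threshold is an assumption
boxes = prediction["boxes"][keep]   # bounding boxes, (x1, y1, x2, y2)
masks = prediction["masks"][keep]   # per-instance segmentation masks
print(f"{len(boxes)} candidate regions proposed for manual review")
```

In MAIA, such machine-proposed regions are not final annotations; they are presented to human observers, which is where the reported speed gain over purely manual annotation comes from.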

Highlights

  • The contributions of this paper can be summarized as follows: (1) we present a machine learning assisted method for image annotation that enables faster manual image annotation than previously used methods, (2) we are the first to present the use of Mask R-CNN in the context of marine environmental monitoring and exploration, and (3) we present a detailed analysis of manual annotation speed with three image collections featuring different types of background, imaging equipment and object classes.

  • We evaluated the segmentation with recall = TP_θ / (TP_θ + FN_θ), where TP_θ is the number of objects of interest (OOI) contained in interesting regions and FN_θ is the number of OOI not contained in an interesting region, and precision = TP_ρ / (TP_ρ + FP_ρ), where TP_ρ is the number of interesting regions containing an OOI and FP_ρ is the number of interesting regions not containing an OOI (see the code sketch after this list).

  • We present Machine learning Assisted Image Annotation (MAIA), a novel machine learning assisted method for image annotation in environmental monitoring and exploration.
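Spelled out in code, the recall and precision definitions above are simple ratios of counts. The sketch below (the function names are ours) computes both metrics from those counts; the example numbers are invented purely for illustration.

```python
def segmentation_recall(tp_theta: int, fn_theta: int) -> float:
    """recall = TP_θ / (TP_θ + FN_θ): the fraction of objects of
    interest (OOI) that are contained in an interesting region."""
    return tp_theta / (tp_theta + fn_theta)


def segmentation_precision(tp_rho: int, fp_rho: int) -> float:
    """precision = TP_ρ / (TP_ρ + FP_ρ): the fraction of interesting
    regions that actually contain an OOI."""
    return tp_rho / (tp_rho + fp_rho)


# Invented counts for illustration only:
print(segmentation_recall(841, 159))     # 0.841 -> 84.1% recall
print(segmentation_precision(300, 700))  # 0.3   -> 30% precision
```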


Introduction

MAIA is a machine learning assisted image annotation method for environmental monitoring and exploration. This work contributes to the Natural Environment Research Council "Autonomous Ecological Surveying of the Abyss" project (NERC grant NE/H021787/1) and the NERC Climate Linked Atlantic Sector Science (CLASS) programme. Funding was received from the European Community's ECO2 project (grant agreement no. 603418) and from the European Commission's Horizon 2020 projects EMSOLink (no. 731036) and STEMM-CCS (no. 654462).

