Abstract

Marine cabled video-observatories allow the non-destructive sampling of species at frequencies and durations never attained before. Nevertheless, the lack of appropriate methods for automatically processing video imagery limits the use of this technology for ecosystem monitoring. Automation is a prerequisite for handling the huge quantities of video footage captured by cameras, and it can transform these devices into true autonomous sensors. In this study, we developed a novel methodology based on genetic programming for content-based image analysis. Our aim was to capture the temporal dynamics of fish abundance. We processed more than 20,000 images acquired in a challenging real-world coastal scenario at the OBSEA-EMSO testing site. The images were collected at a 30-min frequency, continuously for two years, day and night. The highly variable environmental conditions allowed us to test the effectiveness of our approach under changing light radiation, water turbidity, background confusion, and bio-fouling growth on the camera housing. The automated recognition results were highly correlated with the manual counts, and they proved highly reliable for tracking fish variations at hourly, daily, and monthly time scales. In addition, our methodology can be easily transferred to other cabled video-observatories.
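The validation described above compares automated counts against manual counts per image. As a minimal sketch of that kind of check, the snippet below computes a Pearson correlation coefficient over a handful of hypothetical counts (the numbers are illustrative, not the study's data):

```python
# Sketch: Pearson correlation between automated and manual fish counts.
# The count values below are hypothetical, for illustration only.
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

manual    = [3, 7, 0, 12, 5, 9, 1, 4]   # hypothetical manual counts per image
automated = [2, 8, 0, 11, 6, 9, 2, 4]   # hypothetical automated counts

print(round(pearson_r(manual, automated), 3))  # → 0.979
```

A value near 1 would indicate that the automated counts track the manual ones closely, which is the sense in which the abstract reports high correlation.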

Highlights

  • Recent technological progress has rapidly advanced the exploration of the world’s oceans, opening up new possibilities to address questions related to the variety, distinctiveness and complexity of marine life

  • Many automated recognition and classification approaches have been tested and validated on the Fish4Knowledge (F4K) repository[22], which only provides underwater images acquired during daylight in oligotrophic and transparent coral reef waters. These automated approaches span a wide range of topics, from statistics[23] to convolutional neural networks[13,24,25] and unsupervised machine learning[26]

  • We developed a novel methodology for automated fish recognition and counting at a cabled video-observatory, which allowed us to take into account a variety of operating circumstances that included wide variations in light intensity, turbidity, fouling growth and dense fish assemblages

Introduction

Increasing efforts are being made to implement the use of underwater video cameras. Despite their high deployment and maintenance costs[15], installing cameras coupled with other biogeochemical and physical sensors allows cabled observatories to provide powerful devices for quantifying biotic components at time frequencies spanning from seconds to hours, months, and even years, producing a huge amount of data that urgently needs appropriate methodologies for effective automated processing[16]. Many automated recognition and classification approaches have been tested and validated on the Fish4Knowledge (F4K) repository[22], which only provides underwater images acquired during daylight (i.e., from sunrise to sunset) in oligotrophic and transparent coral reef waters. These automated approaches span a wide range of topics, from statistics[23] to convolutional neural networks[13,24,25] and unsupervised machine learning[26]. Ad-hoc aquaculture devices have been employed to force the fish to swim frontally to the video cameras[6].
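To make the genetic-programming idea concrete, the toy sketch below (not the authors' implementation) evolves an arithmetic expression tree over hypothetical per-frame image features using a simple mutation-only (1+1) loop, and thresholds the evolved score to flag frames containing fish. The feature names and data are invented for illustration:

```python
# Toy genetic-programming sketch: evolve an expression tree whose sign
# separates "fish present" frames from empty ones.  Mutation-only (1+1)
# evolution; all features and labels below are hypothetical.
import random

random.seed(0)

FUNCS = {'add': lambda a, b: a + b,
         'sub': lambda a, b: a - b,
         'mul': lambda a, b: a * b}

def random_tree(depth=3):
    """Grow a random expression tree over features x0..x2 and constants."""
    if depth <= 0 or random.random() < 0.3:
        if random.random() < 0.7:
            return ('x', random.randrange(3))     # feature terminal
        return ('c', random.uniform(-1.0, 1.0))   # constant terminal
    f = random.choice(list(FUNCS))
    return (f, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, feats):
    """Evaluate a tree on one frame's feature vector."""
    tag = tree[0]
    if tag == 'x':
        return feats[tree[1]]
    if tag == 'c':
        return tree[1]
    return FUNCS[tag](evaluate(tree[1], feats), evaluate(tree[2], feats))

def mutate(tree):
    """Replace a random subtree with a freshly grown one."""
    if tree[0] in ('x', 'c') or random.random() < 0.2:
        return random_tree()
    return (tree[0],) + tuple(mutate(child) for child in tree[1:])

def accuracy(tree, data):
    """Fraction of frames where (score > 0) matches the fish label."""
    return sum((evaluate(tree, f) > 0) == label for f, label in data) / len(data)

# Hypothetical per-frame features (e.g. foreground area, edge density,
# mean brightness); label 1 = fish present, 0 = empty frame.
frames = [((0.9, 0.8, 0.4), 1), ((0.8, 0.7, 0.5), 1), ((0.7, 0.9, 0.3), 1),
          ((0.1, 0.2, 0.6), 0), ((0.2, 0.1, 0.7), 0), ((0.15, 0.25, 0.55), 0)]

best = random_tree()
for _ in range(300):                    # (1+1) loop: keep mutants that are no worse
    cand = mutate(best)
    if accuracy(cand, frames) >= accuracy(best, frames):
        best = cand

print('training accuracy:', accuracy(best, frames))
```

A full GP system such as the one the paper describes would add a population, crossover, and far richer image features, but the evolve-evaluate-select cycle is the same.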
