Abstract
Although the oceans cover more than 70% of the planet, they remain, unlike land, largely unexplored. Global fisheries resources are central to sustainability and quality of life on Earth but are under threat from climate change, ocean acidification and overconsumption. One way to analyze these marine resources is through remote underwater surveying. However, the sheer volume of recorded data often makes classification and analysis difficult, time-consuming and resource-intensive. Recent developments in machine learning (ML) have shown promise in extracting high-level context from images, with near-human performance on image classification tasks. The application of ML to remote underwater surveying can drastically reduce the processing time of these datasets. Training the deep neural networks used in ML requires large-scale benchmark datasets against which proposed algorithms for this kind of image classification can be tested. Currently, none of the publicly available datasets in the marine vision research domain is large enough to reliably train a deep model. In this work, a publicly available large-scale benchmark underwater video dataset is created and used to retrain a state-of-the-art machine vision deep model (Mask R-CNN). The retrained model is applied to detecting and classifying underwater marine life using random under-sampling (RUS), and achieves a reasonably high average precision (0.628 mAP), indicating that the dataset is well suited to training instance segmentation deep neural networks for detecting underwater marine species.
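The abstract does not describe the training pipeline in detail, so the following is a minimal sketch of how a pre-trained Mask R-CNN can be retrained on a custom instance segmentation dataset, using torchvision's implementation. The class count (NUM_CLASSES) and the choice of a COCO-pretrained ResNet-50 FPN backbone are assumptions for illustration, not details taken from the paper; any class-balancing step such as RUS would be applied to the training frames before this stage.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Hypothetical class count: marine species classes plus one background class.
NUM_CLASSES = 11


def build_model(num_classes: int):
    # Start from a Mask R-CNN with a ResNet-50 FPN backbone pre-trained on COCO
    # (an assumed starting point), then replace the box and mask heads so they
    # predict the underwater dataset's own classes.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Swap the box classification head for one sized to num_classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Swap the mask prediction head likewise.
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    hidden_layer = 256
    model.roi_heads.mask_predictor = MaskRCNNPredictor(
        in_features_mask, hidden_layer, num_classes
    )
    return model


model = build_model(NUM_CLASSES)
# The model can then be fine-tuned with a standard detection training loop
# over the (class-balanced) underwater video frames and evaluated with mAP.
```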