Abstract
Interpretable and explainable machine learning have proven to be promising approaches for verifying the quality of a data-driven model in general as well as for obtaining more information about the quality of individual observations in practice. In this paper, we use these approaches for an application in the marine sciences to support the monitoring of whales. Whale population monitoring is an important element of whale conservation, and the identification of individual whales plays an important role in this process, for example to trace their migration over time and space. Classical approaches use photographs and manual matching, with special focus on the shape of the whale flukes and their unique pigmentation. However, this is not feasible for comprehensive monitoring. Machine learning methods, especially deep neural networks, have shown that they can identify large numbers of whales automatically and efficiently. Despite their success for tasks such as identification, further potentials such as interpretability and its benefits have not yet been exploited. Our main contribution is an analysis of interpretation tools, especially occlusion sensitivity maps, and the question of how the gained insights can help a whale researcher. For our analysis, we use images of humpback whale flukes provided by the Kaggle challenge "Humpback Whale Identification". By means of spectral cluster analysis of heatmaps, which indicate which parts of an image are important for a decision, we show that the heatmaps can be grouped in a meaningful way. Moreover, it appears that the characteristics automatically determined by a neural network correspond to those considered important by whale experts.
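To make the occlusion sensitivity maps referred to above more concrete, the following is a minimal sketch, assuming a generic PyTorch classifier (model) and a preprocessed fluke image tensor of shape (3, H, W); the patch size, stride and grey baseline are illustrative choices, not the settings used in the paper.

```python
# Minimal occlusion sensitivity map (OSM) sketch under the assumptions above.
import torch

def occlusion_sensitivity_map(model, image, target_class,
                              patch=32, stride=16, baseline=0.5):
    """Slide a grey patch over the image and record how much the predicted
    score of `target_class` drops for each patch position."""
    model.eval()
    _, h, w = image.shape
    with torch.no_grad():
        base_score = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]

    heatmap = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.clone()
            occluded[:, y:y + patch, x:x + patch] = baseline  # grey out one patch
            with torch.no_grad():
                score = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            heatmap[i, j] = base_score - score  # large drop = important region
    return heatmap
```

Regions whose occlusion causes a large drop in the class score are interpreted as important for the decision, which is what the resulting heatmap visualises.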
Highlights
Interpretable and explainable machine learning has gained momentum in recent years, especially with regard to the development of various methods for a better understanding of complex processes in neural networks (Samek, Müller, 2019)
This is the basis for creating reliable models for practical use, for supporting the user in applying these models, and for providing additional information that cannot be obtained from the machine learning model alone
5.1 Experiment 1: Comparison of Occlusion Sensitivity Maps (OSM) and Grad-CAM. This experiment compares OSM and Grad-CAM heatmaps; a sketch of the Grad-CAM computation is given below
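For context on the comparison, this is a hedged sketch of how a Grad-CAM heatmap can be computed, assuming a PyTorch CNN whose last convolutional layer is passed in explicitly; the layer choice and the normalisation are assumptions rather than details taken from the paper.

```python
# Grad-CAM sketch under the assumptions above: pooled gradients of the
# target-class score weight the feature maps of the chosen conv layer.
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, image, target_class):
    """Return a coarse relevance map, upsampled to the input resolution."""
    model.eval()
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["a"] = output

    def bwd_hook(_, grad_in, grad_out):
        gradients["g"] = grad_out[0]

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)
    try:
        score = model(image.unsqueeze(0))[0, target_class]
        model.zero_grad()
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)       # pool gradients per channel
    cam = F.relu((weights * activations["a"]).sum(dim=1))         # weighted sum + ReLU
    cam = F.interpolate(cam.unsqueeze(0), size=image.shape[1:],   # upsample to image size
                        mode="bilinear", align_corners=False)[0, 0]
    return cam / (cam.max() + 1e-8)                               # normalise to [0, 1]
```

Unlike the occlusion approach, Grad-CAM needs only one forward and one backward pass, which is one reason the two methods are worth comparing.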
Summary
Interpretable and explainable machine learning has gained momentum in recent years, especially with regard to the development of various methods for a better understanding of complex processes in neural networks (Samek, Müller, 2019). Several characteristics are used by whale experts to differentiate individual whales (see Fig. 1). Pigmentation-based surface features: these characteristics are the most obvious to the human eye, apart from large disfiguring shape features. They can change significantly in the first months, up to a couple of years, of the whale's life, and in extremely cold water (Antarctica especially, Greenland and the far North Atlantic to a lesser extent). Our main contribution is the analysis of the generated heatmaps and the question to what extent the quality of the model and of certain observations can be verified with them, and how the findings can help a whale researcher. This includes the research question of whether the identification of whales by a neural network uses the same image characteristics as those considered important by whale experts.
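The spectral cluster analysis of heatmaps mentioned in the abstract could, for example, be sketched as follows, assuming the occlusion maps have been collected into a NumPy array of shape (n_images, h, w); the number of clusters and the affinity settings are illustrative assumptions, not the configuration used by the authors.

```python
# Sketch of grouping heatmaps by spectral clustering under the assumptions above.
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_heatmaps(heatmaps, n_clusters=5):
    """Flatten each heatmap to a vector and group similar relevance patterns."""
    X = heatmaps.reshape(len(heatmaps), -1)                        # one row per heatmap
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)      # remove scale differences
    model = SpectralClustering(n_clusters=n_clusters,
                               affinity="nearest_neighbors",
                               assign_labels="kmeans",
                               random_state=0)
    return model.fit_predict(X)                                    # cluster label per image
```

The resulting cluster labels can then be inspected against the expert-defined fluke characteristics to check whether the network attends to comparable image regions.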