Abstract

In many real data science problems, it is common to encounter a domain mismatch between the training and testing datasets, meaning that solutions designed for one may not transfer well to the other. One example is the BirdCLEF2021 Kaggle competition, in which participants had to identify all bird species audible in audio recordings. Thus, multi-label classifiers capable of coping with domain mismatch were required. In addition, classifiers needed to be resilient to a long-tailed (imbalanced) class distribution and weak labels. Throughout the competition, a diverse range of solutions based on convolutional neural networks was proposed. However, it is unclear how different solution components contribute to overall performance. In this work, we contextualise the problem with respect to the existing literature, analysing and discussing the choices made by the different participants. We also propose a modular solution architecture to empirically quantify the effects of its different components. The results of this study provide insights into which components worked well for this challenge.
