Abstract

Current sonar image recognition methods excel in closed-set, class-balanced scenarios, but real underwater data often follow an open-set, long-tailed distribution, leading to misclassifications, especially among tail classes. Although open-set long-tail recognition (OLTR) has received attention for natural images in recent years, it lacks systematic study for sonar images. To address this gap, we present the first comprehensive study and analysis of open-set long-tail recognition in sonar images (Sonar-OLTR). We establish a Sonar-OLTR benchmark by introducing the Nankai Sonar Image Dataset (NKSID), a new collection of 2617 real-world forward-looking sonar images. We investigate the challenges that long-tailed distributions pose to existing open-set recognition (OSR) evaluation metrics for sonar images and propose two improved metrics. Using this benchmark, we conduct a thorough examination of state-of-the-art OSR, long-tail recognition, OLTR, and out-of-distribution detection algorithms. Additionally, we propose a straightforward yet effective integrated Sonar-OLTR approach as a new baseline. This method introduces a Push the right Logit Up and the wrong Logit Down (PLUD) loss that enlarges feature-space margins between known and unknown classes, and between head and tail classes within the known classes. Extensive experimental evaluation on the benchmark demonstrates the accuracy and speed advantages of PLUD, providing insights for future Sonar-OLTR research. The code and dataset are publicly available at https://github.com/Jorwnpay/Sonar-OLTR.
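The abstract names the PLUD loss but does not give its formula. As a rough illustration only, a loss that "pushes the right logit up and the wrong logit down" can be sketched as a generic hinge-margin penalty on the gap between the true-class logit and the strongest competing logit; the function name, `margin` parameter, and exact form below are assumptions for illustration, not the paper's actual PLUD definition.

```python
import numpy as np

def margin_logit_loss(logits, targets, margin=1.0):
    """Hedged sketch of a 'push right logit up, wrong logit down' idea.

    NOT the paper's PLUD loss: a generic hinge that penalises batches
    where the true-class logit does not exceed the largest competing
    logit by at least `margin` (all names/values here are illustrative).
    """
    idx = np.arange(len(targets))
    right = logits[idx, targets]            # true-class ("right") logits
    masked = logits.copy()
    masked[idx, targets] = -np.inf          # hide the true class
    hardest_wrong = masked.max(axis=1)      # strongest competing logit
    # Hinge on the gap: zero once right exceeds wrong by the margin
    return np.maximum(0.0, margin - (right - hardest_wrong)).mean()
```

Enlarging this logit gap widens decision margins, which is the effect the abstract attributes to PLUD for separating known from unknown classes and head from tail classes.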
