Abstract

The conservation of marine resources requires constant monitoring of the underwater environment by researchers. For this purpose, automated visual monitoring systems are of great interest, especially those that can describe the environment through semantic segmentation based on deep learning. Although such systems have been applied successfully in several domains, such as biomedical imaging, obtaining optimal results in underwater environments remains a challenge due to the heterogeneity of water and lighting conditions and the scarcity of labeled datasets. Moreover, existing deep learning techniques for semantic segmentation provide only low-resolution results, lacking the spatial detail needed for high-performance monitoring. To address these challenges, a combined loss function based on active contour theory and level set methods is proposed to refine the spatial resolution and quality of the segmentation. To evaluate the method, a new underwater dataset with pixel-level annotations for three classes (fish, seafloor, and water) was created from images of publicly available datasets such as SUIM, RockFish, and DeepFish. The performance of convolutional neural network (CNN) architectures, such as UNet and DeepLabV3+, trained with different loss functions (cross entropy, Dice, and active contours) was compared; the proposed combined loss function improved the segmentation results by around 3% in both the Intersection over Union (IoU) metric and the Hausdorff Distance (HD).
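The abstract does not spell out the exact formulation of the combined loss. As a rough, hedged illustration of the general idea only, the sketch below adds an active-contour / level-set style term (a contour-length surrogate plus Chan-Vese-like region energies) to pixel-wise cross entropy. The function names, the weighting parameter `lam`, and the use of softmax probabilities are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of a combined cross-entropy + active-contour loss.
# This is an assumed formulation for illustration, not the paper's exact method.
import torch
import torch.nn.functional as F

def active_contour_loss(probs, target_onehot, eps=1e-8):
    """probs, target_onehot: (N, C, H, W) tensors with values in [0, 1]."""
    # Length term: total variation of the predicted probability maps,
    # a discrete surrogate for contour length in level-set formulations.
    dx = probs[:, :, 1:, :] - probs[:, :, :-1, :]
    dy = probs[:, :, :, 1:] - probs[:, :, :, :-1]
    length = torch.sqrt(dx.pow(2).mean() + dy.pow(2).mean() + eps)

    # Region terms (Chan-Vese style): penalize disagreement between the soft
    # prediction and the region it is supposed to lie inside / outside of.
    region_in = (probs * (target_onehot - 1.0).pow(2)).mean()
    region_out = ((1.0 - probs) * target_onehot.pow(2)).mean()
    return length + region_in + region_out

def combined_loss(logits, target, lam=1.0):
    """logits: (N, C, H, W); target: (N, H, W) integer class labels."""
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    return ce + lam * active_contour_loss(probs, onehot)
```

In this kind of scheme, the cross-entropy term drives per-pixel classification while the active-contour term encourages smooth, well-localized boundaries; `lam` balances the two and would need to be tuned per dataset.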
