Abstract

Underwater target detection plays a noteworthy role in marine exploration. However, it is difficult to extract useful feature information from blurred images with complex backgrounds, which leads to suboptimal and unsatisfactory detection results in conventional models. Among existing detectors, YOLOv5 leverages its fast detection speed and performs better on underwater samples. Nevertheless, YOLOv5 still suffers from missed and incorrect detections caused by the small scale of underwater objects, the dense distribution of organisms, and occlusion. To address these challenges, we propose a novel YoLoWaternet (YWnet) model that builds upon the YOLOv5 framework for complex underwater species detection, with three main innovations: 1) A convolutional block attention module (CBAM) is introduced in the early stages of the network to enhance feature extraction from blurry images, and a new feature fusion network, the CRFPN, is designed to transfer important information for detecting underwater targets. 2) A novel feature extraction module, the skip residual C3 module (SRC3), is presented; it effectively merges information from various scales to minimize the loss of original information during transmission. 3) Regression and classification are separated by a decoupled head to improve detection effectiveness, and the EIoU loss function is employed to accelerate convergence. The experimental results demonstrate that YWnet achieves 73.2% mAP and 39.3% mAP50–95 on the underwater dataset, surpassing YOLOv5 by 2.3% and 2.4%, respectively. Furthermore, the proposed fusion model outperforms nine state-of-the-art baseline models on the undersea dataset and generalizes well to other datasets.
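
For concreteness, a minimal PyTorch sketch of the EIoU bounding-box loss referenced above is given below. The function name, the (x1, y1, x2, y2) box format, and the tensor shapes are illustrative assumptions rather than the authors' implementation; the formula follows the standard EIoU definition (IoU term plus center-distance, width, and height penalty terms).

```python
import torch

def eiou_loss(pred, target, eps=1e-7):
    """Per-box EIoU loss for boxes given as (x1, y1, x2, y2) tensors of shape (N, 4).

    Illustrative sketch only; not the paper's exact code.
    """
    # Intersection area
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    # Union and IoU
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Smallest enclosing box (diagonal length squared)
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Center-distance penalty (as in DIoU)
    cx1, cy1 = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx2, cy2 = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2

    # Width and height penalties, the terms that distinguish EIoU
    rho_w2 = (w1 - w2) ** 2 / (cw ** 2 + eps)
    rho_h2 = (h1 - h2) ** 2 / (ch ** 2 + eps)

    return 1 - iou + rho2 / c2 + rho_w2 + rho_h2
```

Because the width and height differences are penalized directly instead of through an aspect-ratio term, gradients with respect to box size remain informative, which is consistent with the faster convergence claimed in the abstract.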
