Abstract

Synthetic aperture radar (SAR) is an important tool for accurate and efficient monitoring of maritime traffic through the identification of ships and their wakes. Ship wakes not only assist in ship detection but also enable the inversion of ship navigation parameters, and have therefore attracted wide attention. However, most work on ship wake detection still relies on non-deep-learning methods. Hindered by factors such as indistinct features, complex background interference, and insufficient label information in large-scale SAR images, deep learning has developed slowly in the field of ship wake detection. In this paper, we propose a lightweight deep learning network for ship and wake detection. In this framework, a YOLO-based deep learning structure is implemented to classify and localize ships and wakes. For this purpose, we first construct a Gaofen-3 SAR dataset containing about 1000 pairs of ships and wakes. To streamline the network, and based on the background and pixel characteristics of SAR images, we design a combined convolutional module incorporating an attention mechanism, which replaces the convolutional layers in the backbone to extract more diverse feature maps at low cost. In addition, in view of the geometric features of ship wakes, the target prediction stage is optimized and an angle-related loss function is developed to guide network training, which not only ensures ship detection but also greatly improves the accuracy of wake line detection. Experimental results show that the proposed lightweight network achieves accurate and stable detection of ships and wakes on the dataset, and generalizes well to complex backgrounds and weak targets. Its performance is comparable to state-of-the-art models, with average precisions of 77.86% for ship detection and 97.29% for wake detection.
In particular, the average angle error between the predicted and ground-truth wake lines is reduced by 3.476°. Moreover, the algorithm's low energy consumption (model size one-sixth that of YOLOv4) and real-time detection speed (>23.1 FPS) offer great potential for embedded deployment.
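The abstract does not reproduce the angle-related loss itself. As a minimal sketch of one plausible form of such a penalty, the snippet below computes a normalized angular error between a predicted and a ground-truth wake-line orientation, accounting for the 180° ambiguity of a line's direction; the function name and normalization are illustrative assumptions, not the paper's actual formulation.

```python
def wake_angle_loss(pred_deg: float, true_deg: float) -> float:
    """Illustrative angle penalty for wake-line orientation (hypothetical,
    not the paper's loss). A line's orientation is invariant under a
    180-degree rotation, so the error is wrapped into [0, 90] degrees
    and then normalized to [0, 1]."""
    diff = abs(pred_deg - true_deg) % 180.0   # wrap into [0, 180)
    diff = min(diff, 180.0 - diff)            # line ambiguity: max error is 90
    return diff / 90.0                        # normalize to [0, 1]
```

For example, orientations of 10° and 170° differ by only 20° once the line ambiguity is taken into account, so the penalty is small, whereas perpendicular lines incur the maximum penalty of 1.0. A differentiable variant of such a term could be added to the standard YOLO localization loss during training.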
