Abstract

Automatic ship detection in optical remote-sensing (ORS) images has wide applications in civil and military fields. Compared with synthetic aperture radar (SAR) images, research on ship detection in ORS images started late, and traditional image-processing algorithms struggle to achieve high accuracy. We therefore propose a ship-detection method based on a deep convolutional neural network modified from YOLOv3, which we call fused features and rebuilt (FFR) YOLOv3. We made several improvements to enhance its performance on ship-detection tasks. We added a squeeze-and-excitation (SE) structure to the backbone network to strengthen its feature-extraction ability, and through a large number of experiments we optimized the backbone network to improve speed. We also improved the multi-scale detection of YOLOv3 by fusing multi-scale feature maps and regenerating them with a high-resolution network, which improves both detection and localization accuracy. We trained, tested, and verified our network on the public HRSC2016 ship-detection dataset and on remote-sensing images collected from Google Earth; it reached a detection speed of about 27 frames per second (fps) on an NVIDIA RTX 2080 Ti, with recall (R) = 95.32% and precision (P) = 95.62%. Experiments show that our network achieves better accuracy and speed than other methods. In addition, it has strong robustness and can adapt to complex environments such as inshore ship detection.
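The squeeze-and-excitation structure mentioned above recalibrates channel responses: it pools each channel to a single descriptor, passes the descriptors through a small bottleneck, and rescales the channels by the resulting sigmoid weights. The paper does not give the authors' exact implementation; the following is a minimal NumPy sketch of a generic SE block, with the weight matrices `w1` and `w2` and the reduction ratio as illustrative assumptions.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Generic squeeze-and-excitation recalibration of a (C, H, W) feature map.

    w1: (C // r, C) and w2: (C, C // r) are the bottleneck FC weights,
    where r is the channel-reduction ratio (hypothetical parameters here,
    not taken from the paper).
    """
    # Squeeze: global average pooling over the spatial dimensions
    z = feature_map.mean(axis=(1, 2))             # (C,) channel descriptor
    # Excitation: FC -> ReLU -> FC -> sigmoid gives per-channel weights in (0, 1)
    s = np.maximum(w1 @ z, 0.0)                   # (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))          # (C,)
    # Scale: reweight each channel by its learned importance
    return feature_map * s[:, None, None]

rng = np.random.default_rng(0)
C, r = 32, 16
x = rng.standard_normal((C, 8, 8))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = squeeze_excite(x, w1, w2)
```

Because the gate is a sigmoid, each output channel is the input channel scaled by a factor strictly between 0 and 1, so the block can suppress uninformative channels without changing the feature-map shape.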
