Abstract

Synthetic aperture radar (SAR) ship detection is an active and challenging research problem. Traditional methods rely on hand-crafted feature extraction or limited shallow-learning feature representations. Recently, owing to their excellent feature-representation ability, deep neural networks such as the faster region-based convolutional neural network (FRCN) have shown strong performance on object detection tasks. However, several challenges limit the application of FRCN to SAR ship detection: (1) FRCN with a fixed receptive field cannot match the scale variability of multiscale SAR ship targets, and its performance degrades when the targets are small; (2) as a two-stage detector, FRCN performs intensive computation and therefore detects slowly; (3) when the background is complex, the imbalance between easy and hard examples leads to a high false-detection rate. To tackle these issues, we design a multilayer fusion light-head detector (MFLHD) for SAR ship detection. Instead of using a single feature map, shallow high-resolution and deep semantic features are combined to produce region proposals. In the detection subnetwork, we propose a light-head detector with large-kernel separable convolution and position-sensitive pooling to improve detection speed. In addition, we adopt focal loss in the loss function so that training focuses on hard examples, reducing false alarms. Extensive experiments on the SAR ship detection dataset (SSDD) show that the proposed method achieves superior performance in both accuracy and speed.
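The abstract's third point concerns the imbalance between easy and hard examples. Focal loss (Lin et al.) addresses this by down-weighting well-classified examples. As a minimal sketch (not the paper's exact formulation; `gamma` and `alpha` are the standard hyperparameters, with `gamma=2.0`, `alpha=0.25` as commonly used defaults), for binary classification:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for a single binary example.

    p: predicted probability of the positive class, y: label in {0, 1}.
    The (1 - p_t)**gamma factor shrinks the loss of easy examples
    (p_t near 1), so gradients are dominated by hard examples.
    """
    p_t = p if y == 1 else 1.0 - p            # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With `gamma = 2`, an easy positive (`p = 0.9`) contributes a loss two to three orders of magnitude smaller than a hard positive (`p = 0.1`), which is how training is steered toward hard examples.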

Highlights

  • Synthetic aperture radar (SAR) is a coherent imaging technology that provides high-resolution, all-day, and all-weather images [1,2]

  • Two experiments are designed to explore the effect of multilayer fusion and the influence of light-head design

  • The comparison with other methods indicates the outperformance of the proposed method

Summary

Introduction

Synthetic aperture radar (SAR) is a coherent imaging technology that provides high-resolution, all-day, and all-weather images [1,2]. Methods based on CFAR require high contrast between the target and the background clutter in the SAR image, and they rest on assumptions about the statistical distribution of the clutter. Kang et al. [20] presented a small-ship detection framework that fuses deep semantic and shallow high-resolution features, using additional contextual features to provide complementary information for classification and to help rule out false alarms. The dominance of easy examples during training makes it difficult for the detector to detect hard examples and leads to a high false-detection rate. To address these issues, inspired by [23], we propose a multilayer fusion light-head detector for multiscale object detection. To realize multiscale SAR ship detection, the proposed method fuses shallow high-resolution and deep semantic features to generate region proposals.
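The fusion step described above can be sketched simply: the deep, semantically rich map is upsampled to the shallow map's resolution and the two are concatenated along the channel axis before region proposals are generated. This is an illustrative NumPy sketch under assumed shapes (the function name `fuse_features` and the 2x stride ratio are assumptions, not the paper's exact architecture):

```python
import numpy as np

def fuse_features(shallow, deep):
    """Fuse a shallow high-resolution map with a deep semantic map.

    shallow: array of shape (C1, H, W)
    deep:    array of shape (C2, H // 2, W // 2)

    The deep map is upsampled by nearest-neighbour interpolation to
    the shallow resolution, then the two maps are concatenated on the
    channel axis, preserving both fine spatial detail and semantics.
    """
    up = deep.repeat(2, axis=1).repeat(2, axis=2)   # 2x nearest-neighbour upsampling
    return np.concatenate([shallow, up], axis=0)    # shape (C1 + C2, H, W)
```

In practice the upsampling would be a learned deconvolution (or bilinear interpolation) inside the network, and a 1x1 convolution typically reduces the concatenated channels before the RPN.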

Proposed Method
Backbone Network
RPN Subnetwork
Multilayer Fusion
Region Proposal Network
Loss Function
Detection Subnetwork
Large-Kernel Separable Convolution
Position-Sensitive RoI-Pooling
Experiments and Results
Experimental Dataset and Settings
Experimental Settings
Evaluation Indicators
The Influence of Backbone Network
The Influence of Multilayer Fusion
The Influence of Parameter γ in Focal Loss
Experiments on SSDD
Experiments on Sentinel-1 Images
Conclusions