Abstract

Convolutional neural networks (CNNs) have found applications in ship detection from synthetic aperture radar (SAR) images. However, several challenges hamper their advance. First, the detected bounding boxes are not very compact. Second, there are quite a few missed detections for small and densely clustered ships. Third, objects on land with similar scattering characteristics are mistakenly detected as ships. This is because: 1) CNN-based SAR ship detectors do not exploit spatial information sufficiently; 2) features learned by CNNs describe SAR images only in the spatial domain while neglecting information hidden in the frequency domain; and 3) information contained in the metadata file, which may link to other sources, is not taken into account. To overcome these problems, this paper proposes a cascade coupled CNN-guided (3C2N-guided) visual attention method for SAR ship detection. The method uses the newly presented 3C2N model as a qualified ship proposal generator because it exploits the images' spatial information more fully. The 3C2N model, with a coupled CNN as its baseline, consists of a sequence of cascade detectors for training. Complementarily, a pulse cosine transform-based visual attention model in the frequency domain is applied to adaptive regions for ship discrimination; this further refines the proposals' locations and significantly reduces missed detections and false alarms. In addition, digital elevation model data are adopted to remove ship-like targets on land. Experimental evaluations on 25 Sentinel-1 images demonstrate that the proposed method is superior to previous state-of-the-art methods.
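
For illustration only (not the authors' implementation), the sketch below shows the standard pulse cosine transform (PCT) saliency computation that frequency-domain visual attention models of this kind are typically built on: keep only the signs of the 2-D DCT coefficients, invert the transform, square, and smooth. The function name pct_saliency and the smoothing bandwidth sigma are illustrative assumptions, not parameters from the paper.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.ndimage import gaussian_filter


def dct2(x):
    # Separable 2-D discrete cosine transform.
    return dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")


def idct2(x):
    # Separable 2-D inverse discrete cosine transform.
    return idct(idct(x, axis=0, norm="ortho"), axis=1, norm="ortho")


def pct_saliency(image, sigma=3.0):
    """Generic PCT saliency sketch (assumed formulation, not the paper's code):
    retain only the sign ("pulse") of each DCT coefficient, reconstruct,
    square, and Gaussian-smooth to obtain a saliency map in [0, 1]."""
    img = np.asarray(image, dtype=np.float64)
    pulses = np.sign(dct2(img))            # +1 / 0 / -1 spectrum
    recon = idct2(pulses)                  # reconstruction from sign information only
    sal = gaussian_filter(recon ** 2, sigma=sigma)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

In a pipeline like the one described in the abstract, such a saliency map would be computed on each adaptive region around a CNN proposal and thresholded to confirm or reject the ship hypothesis and to tighten the bounding box.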
