Abstract

Space-based infrared tiny ship detection aims to separate tiny ships from images captured by Earth-orbiting satellites. Due to the extremely large coverage area of each image (e.g., thousands of square kilometers), candidate targets are much smaller, dimmer, and more changeable than those observed by aerial- and land-based imaging devices. Existing infrared datasets and target detection methods, designed for short imaging distances, cannot be directly adapted to the space-based surveillance task. To address these problems, we develop a space-based infrared tiny ship detection dataset (namely, NUDT-SIRST-Sea) with 48 space-based infrared images and 17,598 pixel-level tiny ship annotations. Each image covers about 10,000 km² of area with 10,000 × 10,000 pixels. Considering the extreme characteristics (e.g., small, dim, and changeable) of those tiny ships in such challenging scenes, we propose a multilevel TransUNet (MTU-Net) in this article. Specifically, we design a vision Transformer (ViT) and convolutional neural network (CNN) hybrid encoder to extract multilevel features: local feature maps are first extracted by several convolution layers and then fed into the multilevel ViT module (MVTM) to capture long-distance dependencies.
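The core idea of such a hybrid encoder is that conv layers produce a local feature map, which is then split into patch tokens so a Transformer block can mix information across the whole image. The sketch below is not the paper's actual MVTM; it is a minimal NumPy illustration (identity Q/K/V projections, single head, no learned weights) of the patch tokenization and scaled dot-product self-attention that give a ViT its long-range receptive field.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patchify(feat, p):
    """Flatten an (H, W, C) conv feature map into (H//p * W//p, p*p*C)
    patch tokens, as a ViT does before attention."""
    H, W, C = feat.shape
    t = feat.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return t.reshape(-1, p * p * C)

def self_attention(tokens):
    """Scaled dot-product self-attention over (n, d) tokens. Identity
    projections stand in for learned Q/K/V weights, purely to show how
    every token attends to every other token (long-distance dependency)."""
    n, d = tokens.shape
    scores = tokens @ tokens.T / np.sqrt(d)   # (n, n) pairwise affinities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ tokens                   # each output mixes all tokens

# Example: an 8x8 feature map with 4 channels, 2x2 patches -> 16 tokens.
feat = np.random.default_rng(0).normal(size=(8, 8, 4))
tokens = patchify(feat, 2)          # shape (16, 16)
mixed = self_attention(tokens)      # shape (16, 16)
```

In a real ViT block, `tokens` would be linearly projected to queries, keys, and values, combined over multiple heads, and followed by a feed-forward layer and residual connections; the global `(n, n)` attention pattern shown here is what distinguishes the MVTM stage from the purely local convolutions that precede it.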
We further propose a copy–rotate–resize–paste (CRRP) data augmentation approach to accelerate the training phase, which effectively alleviates the problem of sample imbalance between targets and background. In addition, we design a FocalIoU loss to achieve both accurate target localization and shape description. Experimental results on the NUDT-SIRST-Sea dataset show that our MTU-Net outperforms traditional and existing deep learning-based single-frame infrared small target (SIRST) detection methods in terms of probability of detection, false alarm rate, and intersection over union. Our code is available at https://github.com/TianhaoWu16/Multi-level-TransUNet-for-Space-based-Infrared-Tiny-ship-Detection.
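A copy–rotate–resize–paste augmentation, as the name suggests, copies an annotated target patch, applies a random rotation and resize, and pastes it at a random background location, increasing the number of target pixels per training image. The following is a simplified NumPy sketch under stated assumptions (90° rotations, integer resize factors, a single source target per image); the paper's actual CRRP policy may use different rotation angles, scale ranges, and paste rules.

```python
import numpy as np

def crrp_augment(image, mask, rng, n_paste=5):
    """Copy-rotate-resize-paste sketch: copy the bounding box of an
    annotated target, rotate and resize it randomly, and paste it at
    random locations to reduce target/background sample imbalance.

    image: (H, W) float array, mask: (H, W) binary array."""
    img, msk = image.copy(), mask.copy()
    ys, xs = np.nonzero(msk)
    if ys.size == 0:                      # no target to copy from
        return img, msk
    # Bounding box of the existing target pixels -> source patch.
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    patch, pmask = img[y0:y1, x0:x1], msk[y0:y1, x0:x1]
    for _ in range(n_paste):
        k = rng.integers(4)               # rotate by k * 90 degrees
        p, pm = np.rot90(patch, k), np.rot90(pmask, k)
        s = int(rng.integers(1, 3))       # integer resize factor (1 or 2)
        p, pm = p.repeat(s, 0).repeat(s, 1), pm.repeat(s, 0).repeat(s, 1)
        h, w = p.shape
        if h >= img.shape[0] or w >= img.shape[1]:
            continue                      # skip pastes that do not fit
        ty = int(rng.integers(0, img.shape[0] - h))
        tx = int(rng.integers(0, img.shape[1] - w))
        region = pm > 0                   # paste only the target pixels
        img[ty:ty + h, tx:tx + w][region] = p[region]
        msk[ty:ty + h, tx:tx + w][region] = pm[region]
    return img, msk
```

Because each pasted copy adds foreground pixels that are otherwise extremely rare in a 10,000 × 10,000 scene, a segmentation loss sees far more positive samples per batch, which is the imbalance-alleviation effect the abstract describes.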
