Abstract

Target detection plays a vital role in radar image interpretation. Early studies usually relied on model-driven strategies, such as the constant false alarm rate (CFAR) technique. This family of methods aims to find statistically anomalous points, so detection performance depends on the statistical modeling of the clutter and suffers from model mismatch in realistic scenarios. Recent years have witnessed a resurgence of deep neural networks, which can learn high-level representations and have achieved impressive performance in many fields. However, large amounts of training samples are required to estimate their weights and biases, and for radar sensors it is costly and difficult to collect labeled data. To address this problem, this article presents a model-data co-driven ship detection strategy. The clutter of radar images is first modeled by the <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${\mathcal{ G}}$ </tex-math></inline-formula> distribution, whose scale and shape parameters are estimated from the available data and used to simulate sea clutter. Second, the available labeled targets are extended to several different azimuths to construct a target set. Finally, targets randomly chosen from the target set are embedded into the simulated sea clutter. The synthesized images and the original ones are combined to train the deep architecture. Multiple comparative studies demonstrate the advantage of the proposed strategy.
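The augmentation pipeline described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it uses a gamma-texture compound model (a common special case within the generalized clutter family) in place of the full distribution, rotates a target chip to a few azimuths with simple 90-degree steps, and embeds a randomly chosen chip into the simulated clutter. All function names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_clutter(shape, scale, shape_param):
    """Simulate sea-clutter intensity with a compound model:
    gamma-distributed texture modulating exponential speckle.
    (Illustrative stand-in for the G-distribution sampler;
    scale/shape_param would be estimated from real data.)"""
    texture = rng.gamma(shape_param, scale, size=shape)
    speckle = rng.exponential(1.0, size=shape)
    return texture * speckle

def embed_target(clutter, target_chip, row, col):
    """Superimpose a target chip onto a clutter patch."""
    out = clutter.copy()
    h, w = target_chip.shape
    out[row:row + h, col:col + w] += target_chip
    return out

# Extend one labeled target to several azimuths (90-degree
# rotations here; finer angular steps would need interpolation).
target = rng.random((8, 8))
target_set = [np.rot90(target, k) for k in range(4)]

# Synthesize a training image: simulated clutter plus a
# randomly chosen rotated target.
clutter = simulate_clutter((64, 64), scale=1.0, shape_param=2.0)
chip = target_set[rng.integers(len(target_set))]
synthetic = embed_target(clutter, chip, row=10, col=10)
```

In practice the synthesized images produced this way would be mixed with the original labeled images to enlarge the training set for the detector.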
