Abstract

This paper investigates the problem of detecting and recovering dim frequency lines in the lofargram. In theory, sufficiently long time integration can always improve detection performance, but this does not hold for irregularly fluctuating lines. Deep learning has been shown to perform very well on sophisticated visual inference tasks: by composing multiple processing layers, it can learn highly complex representations that amplify the important aspects of the input while suppressing irrelevant variations. Hence, DeepLofargram is proposed, composed of a deep convolutional neural network and its visualization counterpart. Trained end to end with a specifically designed multi-task loss, it jointly learns to detect potential lines and to recover their spatial locations. Leveraging this deep architecture, detection is achieved at SNRs as low as -24 dB on average, and -26 dB in some cases, which is far beyond the perception of human vision and significantly improves on the state of the art.
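To make the joint objective concrete, below is a minimal sketch of a multi-task loss combining a detection term (line present or absent) with a recovery term (a pixel-wise line-location mask). The head structure, the use of binary cross-entropy for both terms, the weighting scheme, and all tensor shapes are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class MultiTaskLoss(nn.Module):
    """Hypothetical joint objective: a detection head plus a spatial
    recovery head, combined as a weighted sum (an assumed scheme)."""

    def __init__(self, det_weight: float = 1.0, rec_weight: float = 1.0):
        super().__init__()
        self.det_weight = det_weight
        self.rec_weight = rec_weight
        self.det_loss = nn.BCEWithLogitsLoss()  # line present / absent
        self.rec_loss = nn.BCEWithLogitsLoss()  # per-pixel line mask

    def forward(self, det_logit, rec_logits, det_label, rec_mask):
        # det_logit: (B, 1) detection score
        # rec_logits: (B, 1, H, W) recovered line-location mask
        return (self.det_weight * self.det_loss(det_logit, det_label)
                + self.rec_weight * self.rec_loss(rec_logits, rec_mask))


if __name__ == "__main__":
    # Dummy lofargram-sized tensors; shapes and weights are assumptions.
    loss_fn = MultiTaskLoss(det_weight=1.0, rec_weight=0.5)
    det_logit = torch.randn(4, 1)
    rec_logits = torch.randn(4, 1, 128, 128)
    det_label = torch.randint(0, 2, (4, 1)).float()
    rec_mask = torch.randint(0, 2, (4, 1, 128, 128)).float()
    print(loss_fn(det_logit, rec_logits, det_label, rec_mask).item())
```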
