Abstract

The you-only-look-once (YOLO) model identifies objects in complex images by framing detection as a regression problem with spatially separated bounding boxes and class probabilities. Object detection in complex images is broadly similar to underwater source detection from acoustic data, e.g., time-frequency distributions. Herein, YOLO is modified for joint source detection and azimuth estimation in a multi-interfering underwater acoustic environment. The modified you-only-look-once (M-YOLO) input is a frequency-beam domain (FBD) sample containing the target and multi-interfering spectra at different azimuths, generated from the received data of a towed horizontal line array. M-YOLO processes the whole FBD sample with a single regression neural network and directly outputs the target-existence probability and spectrum azimuth. Model performance is assessed on both simulated and at-sea data. Simulation results reveal the strong robustness of M-YOLO to different signal-to-noise ratios and mismatched ocean environments. When tested on data collected in an actual multi-interfering environment, M-YOLO achieved near-100% target detection and a root-mean-square error of 0.54° in azimuth estimation.
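The abstract does not spell out how the FBD input is constructed, so the following is a minimal sketch, assuming a uniform horizontal line array and conventional (delay-and-sum) frequency-domain beamforming; the function name `fbd_sample` and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fbd_sample(x, fs, d, c=1500.0, n_az=181, nfft=1024):
    """Sketch of a frequency-beam domain (FBD) map: conventional
    beamforming of line-array data x (n_sensors, n_samples) over a
    grid of azimuths, returning beam power in dB per (freq, azimuth).
    Assumed geometry: uniform sensor spacing d (m), sound speed c (m/s)."""
    n_sensors = x.shape[0]
    X = np.fft.rfft(x, nfft, axis=1)                  # (n_sensors, n_freq)
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)           # Hz
    az = np.linspace(0.0, 180.0, n_az)                # steering azimuths, deg
    # inter-sensor delays for each steering azimuth: (n_sensors, n_az)
    tau = (d / c) * np.outer(np.arange(n_sensors), np.cos(np.deg2rad(az)))
    # steering weights per frequency bin: (n_sensors, n_freq, n_az)
    w = np.exp(2j * np.pi * freqs[None, :, None] * tau[:, None, :])
    # coherent sum over sensors, then normalized beam power
    B = np.abs(np.einsum('sf,sfa->fa', X, w)) ** 2 / n_sensors ** 2
    return 10.0 * np.log10(B + 1e-12), freqs, az

# Toy check: a 64-Hz tone arriving from 60 deg on a 32-element array
fs, d, n_sensors = 1000.0, 10.0, 32
t = np.arange(4096) / fs
delays = np.arange(n_sensors) * d / 1500.0 * np.cos(np.deg2rad(60.0))
x = np.sin(2 * np.pi * 64.0 * (t[None, :] - delays[:, None]))
fbd, freqs, az = fbd_sample(x, fs, d)
print(az[np.argmax(fbd[np.argmin(np.abs(freqs - 64.0))])])  # peaks near 60
```

A map like this, in which the target line spectrum and several interferer spectra appear as ridges at different azimuths, is the kind of image-like input on which a YOLO-style single-pass regressor can localize azimuth while scoring target existence.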
