Abstract

In shallow-water environments, low-frequency acoustic signals exhibit dispersive propagation due to interactions with the sea surface and seabed. The received signal can then be modeled as a set of propagating modes. Single-hydrophone modal dispersion has been used to range baleen whale vocalizations and estimate shallow-water geoacoustic properties. However, these algorithms require preliminary signal detection and human labor to estimate the modal dispersion. Here, we apply a temporal convolutional network (TCN) to time-frequency representations of baleen whale gunshots (impulsive calls) to simultaneously detect and range them in large single-hydrophone passive acoustic monitoring datasets. The TCN jointly learns ranging and detection by training on both synthetic gunshots simulated across multiple environments and ranges and experimental noise. The synthetic data are informed only by the experimental dataset’s water-column depth, sound speed, and density, while other waveguide parameters vary within empirically observed bounds. The method is applied to an experimental North Pacific right whale dataset collected in the Bering Sea using a single hydrophone. To evaluate model performance, 50 calls are manually ranged using a state-of-the-art physics-based inversion method. The TCN closely matches the physics-based range estimates and detects dispersive gunshots among noise-only examples with high precision and recall. [Work supported by the ONR.]
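The abstract does not specify the network architecture beyond naming it a TCN. As background, the defining building block of a TCN is the dilated causal 1-D convolution, in which each output sample depends only on current and past inputs, and stacking layers with growing dilation expands the receptive field exponentially. The sketch below is purely illustrative (plain NumPy, not the authors' model); the function name and weights are hypothetical.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated 1-D convolution (hypothetical helper, not from the paper).

    output[t] = sum_j w[j] * x[t - j*dilation], so output[t] depends only on
    the current and past samples; the input is left-padded with zeros.
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# With kernel [1, 1]: each output mixes the current sample with one past sample.
x = [1.0, 2.0, 3.0, 4.0]
print(causal_dilated_conv1d(x, [1.0, 1.0], dilation=1))  # x[t] + x[t-1]
print(causal_dilated_conv1d(x, [1.0, 1.0], dilation=2))  # x[t] + x[t-2]
```

Stacking such layers with dilations 1, 2, 4, ... lets a TCN summarize long time-frequency contexts (such as a dispersed gunshot sweep) with few parameters, which is one reason the architecture suits this detection-and-ranging task.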
