Abstract

The underwater environment is filled with various sounds, its soundscape composed of biological, geophysical, and anthropogenic sources. Our work focused on developing a novel method to observe and classify these sounds, enriching our understanding of the underwater ecosystem. We constructed a biologging system that allows near-real-time observation of underwater soundscapes. Using deep-learning-based edge processing, the system classifies sound sources on the device and, when the tagged animal surfaces, transmits positional data, the sound-source classification results, and sensor readings such as depth and temperature. To test the system, we attached the logger to sea turtles (Chelonia mydas) and collected data over a cellular network. The data provided information on the location-specific sounds detected around the sea turtles, suggesting it is possible to infer the distribution of specific species over time. The data also showed that not only biological sounds but also geophysical and anthropogenic sounds can be classified, highlighting the potential of multi-point, long-term observations for monitoring the distribution patterns of various sound sources. This system, which can be considered an autonomous mobile platform for oceanographic observations, including soundscapes, has significant potential to enhance our understanding of acoustic diversity.
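The store-and-forward pattern the abstract describes, classifying clips on the device while submerged and uplinking the buffered results when the animal surfaces, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class and function names are hypothetical, and the frequency-threshold classifier is a deliberately simple stand-in for the onboard deep-learning model.

```python
import math
from dataclasses import dataclass, field
from typing import List, Tuple

# Toy stand-in for the onboard deep model: label a clip by its dominant
# frequency, estimated from zero crossings. Thresholds are illustrative only.
def classify_clip(samples: List[float], rate: int) -> str:
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    freq_hz = crossings * rate / (2 * len(samples))
    if freq_hz < 300:
        return "geophysical"      # e.g. waves, rain (toy threshold)
    if freq_hz < 2000:
        return "anthropogenic"    # e.g. vessel noise (toy threshold)
    return "biological"           # e.g. snapping shrimp (toy threshold)

@dataclass
class SoundscapeLogger:
    """Buffer classifications plus depth/temperature readings while
    submerged; flush them when the tag surfaces (simulated uplink)."""
    buffer: List[dict] = field(default_factory=list)
    sent: List[dict] = field(default_factory=list)

    def record(self, label: str, depth_m: float, temp_c: float,
               position: Tuple[float, float]) -> None:
        self.buffer.append({"label": label, "depth_m": depth_m,
                            "temp_c": temp_c, "position": position})

    def on_surface(self) -> int:
        """Transmit all buffered records; return how many were sent."""
        n = len(self.buffer)
        self.sent.extend(self.buffer)
        self.buffer.clear()
        return n
```

A usage example: classify a synthetic 100 Hz clip (low-frequency, labeled geophysical by the toy rule), record it with sensor readings, and flush on surfacing. In the real system a trained network replaces `classify_clip`, and `on_surface` would hand the buffer to the cellular modem.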
