Abstract
The classification of underwater acoustic targets is a challenging task due to the intricacies of the underwater soundscape, high levels of background interference, and the varied acoustic signatures of targets. Various approaches have been explored for classifying underwater targets from their acoustic signatures, but these conventional methods demand extensive domain-specific knowledge for effective feature engineering. Deep learning is therefore appealing as an aid to sonar operators, who otherwise rely mainly on their expertise to classify targets. This work applies deep neural networks to underwater acoustic target classification, aiming to analyze whether the need for extensive domain knowledge can be reduced by letting the deep learning algorithms learn deep audio embeddings themselves. The approach implements an audio classifier that takes log Mel-spectrograms as input. To benchmark its performance, we also implement a regular neural network for audio classification with MFCCs as input features. The overall results of the model are promising.
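The abstract contrasts two feature pipelines: log Mel-spectrograms for the deep audio classifier and MFCCs for the baseline network. The sketch below illustrates how such features are commonly extracted; the use of librosa and the specific parameter values (sampling rate, n_mels, n_mfcc) are assumptions for illustration and are not specified in the abstract.

```python
# Minimal feature-extraction sketch, assuming librosa and illustrative parameters.
import numpy as np
import librosa


def log_mel_spectrogram(path, sr=22050, n_mels=64):
    """Log Mel-spectrogram, the input to the deep audio classifier."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)  # shape: (n_mels, frames)


def mfcc_features(path, sr=22050, n_mfcc=20):
    """MFCC vector, the input to the baseline (regular) neural network."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # average over time for a fixed-length vector
```

Averaging MFCCs over time yields a fixed-length vector suitable for a fully connected network, while the two-dimensional log Mel-spectrogram preserves time-frequency structure for the deep classifier.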