Abstract

In the field of acoustic simulation, widely applied and highly effective methods rely on accurately capturing the impulse response (IR) and its convolution relationship. This article introduces a novel approach, named UnderwaterImage2IR, that generates acoustic IRs from underwater images using dual-path pre-trained networks. The technique aims to achieve cross-modal conversion from underwater visual images to acoustic information with high accuracy at low cost. Our method integrates dual-path pre-trained networks with conditional generative adversarial networks (CGANs) to generate acoustic IRs that match the observed scenes. One branch of the network extracts spatial features from the image, while the other recognizes underwater characteristics. These features are fed into the CGAN, which is trained to generate acoustic IRs corresponding to the observed scenes, thereby achieving high-accuracy acoustic simulation efficiently. Experimental results, compared against the ground truth and evaluated by human experts, demonstrate the significant advantages of our method in generating underwater acoustic IRs, further supporting its potential application in underwater acoustic simulation.
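The convolution relationship the abstract refers to is the standard one in acoustic simulation: a "dry" source signal convolved with a scene's impulse response yields the signal as it would be heard in that scene. The following is a minimal NumPy sketch of that step (the signal and 3-tap IR values are illustrative, not from the paper; a generated IR from UnderwaterImage2IR would be used in place of `ir`):

```python
import numpy as np

def apply_ir(dry_signal: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Simulate propagation through a scene by convolving a dry
    signal with the scene's impulse response (IR)."""
    return np.convolve(dry_signal, ir)

# Toy example with a hypothetical 3-tap IR.
dry = np.array([1.0, 0.0, 0.0])   # a unit impulse as the dry signal
ir = np.array([0.5, 0.3, 0.1])    # illustrative IR, not real data
wet = apply_ir(dry, ir)
# A unit impulse passed through the system reproduces the IR itself,
# which is what makes measured (or generated) IRs sufficient to
# characterize the acoustic scene.
```

In practice the dry signal and IR are sampled audio of matching sample rates, and the convolution is usually performed via FFT for efficiency; the short direct form above is only for clarity.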
