Separating incoherent sound sources within complex acoustic fields is a significant challenge in acoustic imaging. Existing methods, such as Principal Component Analysis (PCA) applied to the Cross-Spectral Matrix (CSM), yield 'virtual' sources based on statistical orthogonality. However, this approach often fails to identify distinct physical sources accurately, primarily because it relies on statistical orthogonality alone. A state-of-the-art remedy computes a rotation matrix that enforces criteria such as least spatial entropy or spatial orthogonality among sources, a process that, while effective, significantly increases computational complexity and time. This work introduces a hybrid approach combining PCA and deep learning to predict spatially disjoint source maps from virtual sources. By simulating sound sources in random quantities and at random locations, we train a neural network tailored to this task. We address the order mismatch between PCA-derived virtual sources and pre-simulated labels by framing source separation as a set prediction problem and using the Hungarian loss to resolve the mismatch efficiently. This method simplifies the separation process, offering fast post-training computation and eliminating the need for complex optimizations. Validation on both simulated and real-world datasets demonstrates the model's effectiveness for source separation in acoustic imaging, indicating the potential of integrating deep learning with existing methods.
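As a concrete illustration of the two building blocks named above, the following is a minimal NumPy/SciPy sketch, not the authors' implementation: it extracts statistically orthogonal 'virtual' sources from a CSM by eigendecomposition (the PCA step) and computes a permutation-invariant Hungarian loss between a set of predicted source maps and ground-truth labels via `scipy.optimize.linear_sum_assignment`. The function names, array shapes, and the mean-squared-error pairwise cost are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def virtual_sources_from_csm(csm, n_sources):
    """Return the n_sources strongest eigen-components of a Hermitian CSM.

    csm : (M, M) complex Hermitian cross-spectral matrix (M microphones).
    Each column sqrt(eigval) * eigvec is a statistically orthogonal
    'virtual' source, which in general mixes the physical sources.
    """
    eigvals, eigvecs = np.linalg.eigh(csm)           # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_sources]      # strongest components first
    return np.sqrt(np.maximum(eigvals[idx], 0.0)) * eigvecs[:, idx]


def hungarian_loss(pred_maps, true_maps):
    """Permutation-invariant MSE between two equal-size sets of source maps.

    pred_maps, true_maps : (K, H, W) arrays of K source maps each.
    The Hungarian algorithm finds the label assignment minimising the
    total pairwise cost, removing the arbitrary ordering of the outputs.
    """
    # Pairwise mean-squared error between every (prediction, label) pair.
    cost = np.mean(
        (pred_maps[:, None] - true_maps[None, :]) ** 2, axis=(2, 3)
    )
    rows, cols = linear_sum_assignment(cost)         # optimal one-to-one matching
    return cost[rows, cols].mean()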