In recent years, significant advances have been made in neural beamforming, which leverages spectral and spatial cues to improve multi-channel speech enhancement. However, when frame-wise processing is required, existing all-neural beamformers face a trade-off between performance and algorithmic delay. Moreover, from the perspective of multi-source information fusion, the network is often encapsulated as a black box that entangles and fuses the spatial and spectral features in a non-linear feature space, which hinders our understanding of how the two cues collaborate for target speech extraction. In this regard, this paper proposes to decouple spatial- and spectral-domain processing, inspired by Taylor's approximation theory. Specifically, we reformulate time-variant beamforming, originally defined in the spatial domain, as the adaptive weighting and mixing of different beam components in the beamspace domain. This reformulation enables us to model the recovery of target speech as a weighted-sum operation in the beamspace domain, where each beam component is associated with an unknown term introduced for residual interference cancellation. By virtue of Taylor series expansion, the recovery process can be decomposed into the superposition of a 0th-order non-derivative term and higher-order derivative terms, where the former acts as a spatial filter in the spatial domain and the latter serve as a residual interference canceller in the spectral domain. We conduct extensive experiments on the spatialized LibriSpeech and L3DAS Challenge datasets. Experimental results show that, compared with existing advanced approaches, the proposed method not only achieves competitive performance on multiple objective metrics but also provides practical guidance for multi-channel speech enhancement pipeline design.
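As a high-level sketch of the decomposition described above (the notation here is introduced for illustration only and is not taken from the paper): let $b_i(t,f)$, $i = 1, \dots, B$, denote the beam components obtained from the multi-channel mixture, $w_i(t,f)$ the adaptive weight of each component, and $\epsilon_i(t,f)$ the unknown residual term attached to each beam. The target estimate can then be written as the beamspace weighted sum
\[
\hat{S}(t,f) \;=\; \sum_{i=1}^{B} w_i(t,f)\, g\big(b_i(t,f),\, \epsilon_i(t,f)\big),
\]
and expanding $g$ in a Taylor series around $\epsilon_i = 0$ gives
\[
g(b_i, \epsilon_i) \;\approx\; \underbrace{g(b_i, 0)}_{\text{0th-order term: spatial filtering}} \;+\; \sum_{n=1}^{N} \frac{1}{n!}\, \frac{\partial^n g}{\partial \epsilon_i^{\,n}}\bigg|_{\epsilon_i = 0}\, \epsilon_i^{\,n},
\]
where the truncated higher-order derivative terms play the role of the spectral-domain residual interference canceller.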