Blind source separation algorithms try to reconstruct original signals, e.g., multiple speakers, from knowledge of their superpositions, using solely the mutual statistical independence of the source signals as the criterion for separation. However, the application of existing algorithms to acoustic superpositions is limited by the complex nature of room transfer functions and by the nonlinear computations they require. Expanding on our previous work [Anemüller and Gramss, DAGA (1998)], we linearize the acoustic source separation problem by moving to the frequency domain, and we eliminate the need to compute nonlinear functions by using a multiple decorrelation approach. Thus, our algorithm exploits the highly redundant structure of, e.g., speech signals in order to reduce the computational cost. Results of separation experiments are presented.
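The decorrelation idea behind such second-order methods can be illustrated in the simplest setting. In the frequency domain, a convolutive room mixture becomes approximately an instantaneous (linear) mixture in each frequency bin; the sketch below, which is an illustrative toy example and not the authors' actual algorithm, separates one such instantaneous two-channel mixture of nonstationary sources using only covariance matrices (whitening followed by diagonalization of a segment covariance). All signals, the mixing matrix `A`, and the segment choice are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
t = np.arange(n)

# Two nonstationary (speech-like) sources: noise with different
# time-varying amplitude envelopes, so segment statistics differ.
env1 = np.where(t < n // 2, 2.0, 0.5)
env2 = np.where(t < n // 2, 0.5, 2.0)
S = np.vstack([env1 * rng.standard_normal(n),
               env2 * rng.standard_normal(n)])

# Hypothetical instantaneous mixing matrix (one frequency bin's mixture).
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S

# Step 1: whiten the mixtures using the covariance over all samples.
C0 = X @ X.T / n
d, E = np.linalg.eigh(C0)
W = E @ np.diag(d ** -0.5) @ E.T
Z = W @ X

# Step 2: the covariance of a nonstationary segment is not proportional
# to the identity in the whitened domain; the rotation that diagonalizes
# it recovers the sources (up to permutation and scale).
seg = Z[:, : n // 2]
C1 = seg @ seg.T / seg.shape[1]
C1 = (C1 + C1.T) / 2           # enforce exact symmetry
_, V = np.linalg.eigh(C1)
Y = V.T @ Z                    # estimated sources
```

Because only covariance (second-order) statistics are used, no pointwise nonlinear function of the data is ever evaluated, which is the computational saving the abstract refers to; the price is that separation relies on the sources' nonstationarity rather than full statistical independence.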