Abstract
Sound source localization is an important problem in speech signal processing. The main challenge arises when multiple sound sources must be localized simultaneously from overlapped speech signals with an unknown number of speakers. A method is therefore required that estimates both the number of speakers and their locations with high accuracy under real-time conditions. Spatial aliasing is an undesirable effect of microphone arrays that degrades the accuracy of localization algorithms in noisy and reverberant conditions. In this article, a cuboids nested microphone array (CuNMA) is first proposed to eliminate spatial aliasing. The CuNMA is designed to receive the speech signals of all speakers in different directions, and the inter-microphone distances are adjusted so that each subarray contains enough microphone pairs to provide suitable information for 3D sound source localization. Subsequently, a speech spectral estimation method is used to evaluate the speech spectrum components; the suitable components are selected and the undesirable components are discarded from the localization process. Because speech information differs across frequency bands, the adaptive wavelet transform is used for sub-band processing in the proposed algorithm. The generalized eigenvalue decomposition (GEVD) method is applied in sub-bands to all nested microphone pairs, and the probability density function (PDF) is calculated to estimate the direction of arrival (DOA) in different sub-bands and consecutive frames. The proper PDFs are selected by thresholding the standard deviation (SD) of the estimated DOAs, and the rest are eliminated. This process is repeated over time frames to extract the best DOAs. Finally, K-means clustering with the silhouette criterion is used to classify the DOAs in order to estimate the number of clusters (speakers) and the corresponding DOAs. All DOAs in each cluster are intersected to estimate each speaker's 3D position, and the point closest to all DOA planes is selected as the speaker position. The proposed method is compared with the hierarchical grid (HiGRID), perpendicular cross-spectra fusion (PCSF), time-frequency-wise spatial spectrum clustering (TF-wise SSC), and spectral source model-deep neural network (SSM-DNN) algorithms in terms of accuracy and computational complexity on real and simulated data in noisy and reverberant conditions. The results show the superiority of the proposed method over these previous works.
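As a rough illustration of the final clustering step described above, the following Python sketch (not the authors' implementation) estimates the number of speakers by clustering retained DOA estimates with K-means and selecting the cluster count that maximizes the silhouette score. The function name `estimate_speaker_count`, the (azimuth, elevation) representation of DOAs, and the candidate range of cluster counts are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' code): estimate the number of speakers by
# clustering frame-wise DOA estimates with K-means and choosing the cluster
# count with the highest silhouette score. DOAs are assumed to be given as
# (azimuth, elevation) pairs in degrees; all names are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_speaker_count(doas, k_max=5, random_state=0):
    """doas: (N, 2) array of retained (azimuth, elevation) estimates."""
    best_k, best_score, best_labels = 1, -1.0, np.zeros(len(doas), dtype=int)
    for k in range(2, min(k_max, len(doas) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=random_state).fit_predict(doas)
        score = silhouette_score(doas, labels)  # in [-1, 1], higher is better
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels

# Example: two well-separated synthetic DOA clusters -> estimated count is 2.
rng = np.random.default_rng(0)
doas = np.vstack([rng.normal([30, 10], 2, size=(50, 2)),
                  rng.normal([120, 20], 2, size=(50, 2))])
n_speakers, labels = estimate_speaker_count(doas)
print(n_speakers)
```

In the paper's pipeline the clustered DOAs of each speaker would then be intersected to obtain the 3D position; the sketch only covers the speaker-count estimation.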
Highlights
Sound source localization (SSL) is one of the important areas in speech processing applications. The main challenge is multiple simultaneous SSL in noisy and reverberant conditions.
Almost 8% of overlapped speech is from three simultaneous speakers, while the rest of the overlapped speech is from two simultaneous speakers.
First, a cuboids nested microphone array (CuNMA) is proposed that eliminates spatial aliasing by using proper inter-microphone distances in all microphone pairs and provides high-quality signals for the SSL algorithm (the underlying spacing constraint is sketched below).
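As a rough illustration of the spatial-aliasing constraint this highlight refers to, the sketch below checks the standard spatial Nyquist condition d ≤ c / (2·f_max) for a given inter-microphone spacing. The function names and example values are illustrative assumptions, not taken from the paper's CuNMA design.

```python
# Illustrative check of the spatial Nyquist condition d <= c / (2 * f_max):
# spacings that violate it cause spatial aliasing for frequencies up to f_max.
SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

def max_alias_free_spacing(f_max_hz):
    """Largest inter-microphone distance (m) free of spatial aliasing up to f_max_hz."""
    return SPEED_OF_SOUND / (2.0 * f_max_hz)

def is_alias_free(spacing_m, f_max_hz):
    return spacing_m <= max_alias_free_spacing(f_max_hz)

# Example: a 4 cm spacing is alias-free up to ~4.3 kHz but not up to 8 kHz.
print(max_alias_free_spacing(8000.0))   # ~0.0214 m
print(is_alias_free(0.04, 4000.0))      # True  (limit ~0.0429 m)
print(is_alias_free(0.04, 8000.0))      # False
```

A nested array addresses this by assigning each sub-band to a subarray whose spacing satisfies the condition for that band's maximum frequency.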
Summary
Sound source localization (SSL) is one of the important areas in speech processing applications. The main challenges in all localization methods can be summarized as follows: (1) high computational complexity, (2) the need for prior information about the speech signal, especially the number of speakers, and (3) low accuracy in the case of multiple simultaneous sound sources in noisy and reverberant conditions. Multiple signal classification (MUSIC) [26] and estimation of signal parameters via rotational invariance techniques (ESPRIT) [27] are algorithms designed to provide higher resolution than parametric methods. These methods were designed for uniform linear arrays and narrowband signals; they have since been extended to circular microphone arrays [28] and wideband signals [29].
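To make the narrowband, uniform-linear-array setting that MUSIC assumes concrete, here is a minimal Python sketch (not the implementation of [26]) that computes the MUSIC pseudospectrum from a spatial covariance matrix. The array geometry, source angles, snapshot count, and noise level are illustrative assumptions.

```python
# Minimal narrowband MUSIC sketch for a uniform linear array (ULA).
# Geometry, angles, and noise level are illustrative assumptions.
import numpy as np

def music_spectrum(R, n_sources, n_mics, d_over_lambda, angles_deg):
    """MUSIC pseudospectrum from the spatial covariance R (n_mics x n_mics)."""
    _, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvecs[:, : n_mics - n_sources]     # noise-subspace eigenvectors
    p = np.zeros(len(angles_deg))
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n_mics) * np.sin(theta))
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return p

# Simulate two narrowband sources at -20 and 30 degrees on an 8-mic ULA.
rng = np.random.default_rng(1)
n_mics, n_snap, d_over_lambda = 8, 200, 0.5
thetas = np.deg2rad([-20.0, 30.0])
A = np.exp(-2j * np.pi * d_over_lambda *
           np.outer(np.arange(n_mics), np.sin(thetas)))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
X = A @ S + 0.1 * (rng.standard_normal((n_mics, n_snap))
                   + 1j * rng.standard_normal((n_mics, n_snap)))
R = X @ X.conj().T / n_snap
grid = np.arange(-90, 90.5, 0.5)
spec = music_spectrum(R, n_sources=2, n_mics=n_mics,
                      d_over_lambda=d_over_lambda, angles_deg=grid)
# Crude peak pick; a proper implementation would use real peak detection.
print(grid[np.argsort(spec)[-2:]])  # should be near -20 and 30 degrees
```

The narrowband plane-wave model used here is exactly what limits classical MUSIC/ESPRIT for wideband speech, which motivates the sub-band and wideband extensions cited above.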