Abstract

In real-world scenarios, the automatic classification of audio signals is a difficult problem. Reverberation and interfering sounds often degrade the target source signal, which leads to a mismatch between training and test data when the classifier is trained on clean, anechoic data. To classify such disturbed signals more accurately, we exploit the spatial distribution of the microphones in ad hoc microphone arrays. The proposed algorithm estimates, in the audio feature domain, clusters of microphones that are either dominated by one of the sources in the acoustic scene or contain mainly signal mixtures and reverberation. Information is shared within and between these clusters to create one feature vector per cluster, which is used to classify the source dominating that cluster. We evaluate the algorithm with simultaneously active sound sources and different ad hoc microphone arrays in simulated reverberant scenarios, as well as with multichannel recordings of an ad hoc microphone setup in a real environment. The cluster-based classification accuracy is higher than the accuracy obtained from single microphone signals and enables robust classification of simultaneously active sources in reverberant environments.
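The abstract does not give implementation details, but the following toy sketch illustrates the general idea of clustering microphones in the feature domain and pooling features per cluster before classification. The feature vectors (random placeholders), the k-means clustering step, the mean pooling, and the SVM classifier are all assumptions made for illustration and are not taken from the paper.

```python
# Sketch only: group per-microphone feature vectors so that microphones dominated
# by the same source fall into one cluster, then pool features within each cluster
# and classify each cluster with a model trained on clean data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in training data: "clean" feature vectors for two source classes
# (placeholders for e.g. spectral features of speech vs. music).
n_train, dim = 200, 13
X_train = np.vstack([rng.normal(0.0, 1.0, (n_train, dim)),   # class 0
                     rng.normal(3.0, 1.0, (n_train, dim))])  # class 1
y_train = np.array([0] * n_train + [1] * n_train)
clf = SVC().fit(X_train, y_train)

# Stand-in test scene: 8 ad hoc microphones, one feature vector per microphone.
# Mics 0-3 are dominated by source 0, mics 4-7 by source 1 (extra spread mimics
# reverberation and interference).
mic_features = np.vstack([rng.normal(0.0, 1.5, (4, dim)),
                          rng.normal(3.0, 1.5, (4, dim))])

# Step 1: estimate clusters of microphones in the audio feature domain
# (k-means is an assumed clustering method, not necessarily the paper's).
n_clusters = 2
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(mic_features)

# Step 2: share information within each cluster by pooling its feature vectors
# into one cluster-level feature vector (here: the mean), then classify the
# source dominating that cluster.
for c in range(n_clusters):
    cluster_feat = mic_features[labels == c].mean(axis=0, keepdims=True)
    predicted_class = clf.predict(cluster_feat)[0]
    print(f"cluster {c}: mics {np.where(labels == c)[0].tolist()} -> class {predicted_class}")
```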
