Abstract

This paper proposes a scheme for analysing speech data, inspired by the concept of working memory, that uses wavelet analysis and unsupervised learning models. The scheme splits a sound stream into arbitrary chunks and produces feature streams by sequentially analysing each chunk with time-frequency methods. Its purpose is to precisely detect the times of transitions as well as the lengths of the stable acoustic units that occur between them. The procedure applies two feature extraction stages to each audio chunk and two types of unsupervised machine learning models: hierarchical clustering and Self-Organising Maps (SOMs). The first pass scans the whole chunk piece by piece, looking for speech and silence parts; it takes the root mean square, the arithmetic mean, and the standard deviation of the samples of each piece and classifies these features with hierarchical clustering into speech and non-speech clusters. The second pass looks for stable patterns and transitions at the locations inferred from the first pass, extracting coefficients with Harmonic and Daubechies wavelets. After the analysis is complete, the chunk advances two seconds, the transient and stable feature vectors are saved in SOMs, and a new cycle begins on the next chunk.
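The first pass described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the piece length, the silence/tone test signal, and the rule that the higher-RMS cluster is speech are all assumptions made here for the example; hierarchical clustering is done with SciPy's Ward linkage.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def first_pass(chunk, piece_len=160):
    """Split a chunk into fixed-length pieces and label each as speech/non-speech."""
    # Split the chunk into equal pieces (piece_len is a hypothetical choice).
    n = len(chunk) // piece_len
    pieces = chunk[: n * piece_len].reshape(n, piece_len)
    # Per-piece features named in the abstract: RMS, arithmetic mean, std. dev.
    feats = np.column_stack([
        np.sqrt(np.mean(pieces ** 2, axis=1)),  # root mean square
        np.mean(pieces, axis=1),                # arithmetic mean
        np.std(pieces, axis=1),                 # standard deviation
    ])
    # Hierarchical clustering of the feature vectors into two clusters.
    labels = fcluster(linkage(feats, method="ward"), t=2, criterion="maxclust")
    # Assumption for this sketch: the cluster with higher mean RMS is speech.
    rms1 = feats[labels == 1, 0].mean()
    rms2 = feats[labels == 2, 0].mean()
    speech_label = 1 if rms1 > rms2 else 2
    return labels == speech_label  # boolean mask, one entry per piece
```

The boundaries between runs of `True` and `False` in the returned mask give the candidate transition locations that the second (wavelet) pass would then examine.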
