Abstract

In the development of a syllable-centric automatic speech recognition (ASR) system, segmentation of the acoustic signal into syllabic units is an important stage. Although the short-term energy (STE) function contains useful information about syllable segment boundaries, it has to be processed before segment boundaries can be extracted. This paper presents a subband-based group delay approach to segment spontaneous speech into syllable-like units. This technique exploits the additive property of the Fourier transform phase and the deconvolution property of the cepstrum to smooth the STE function of the speech signal and make it suitable for syllable boundary detection. By treating the STE function as a magnitude spectrum of an arbitrary signal, a minimum-phase group delay function is derived. This group delay function is found to be a better representative of the STE function for syllable boundary detection. Although the group delay function derived from the STE function of the speech signal contains segment boundaries, the boundaries are difficult to determine in the context of long silences, semivowels, and fricatives. In this paper, these issues are specifically addressed and algorithms are developed to improve the segmentation performance. The speech signal is first passed through a bank of three filters, corresponding to three different spectral bands. The STE functions of these signals are computed. Using these three STE functions, three minimum-phase group delay functions are derived. By combining the evidence derived from these group delay functions, the syllable boundaries are detected. Further, a multiresolution-based technique is presented to overcome the problem of shift in segment boundaries during smoothing. Experiments carried out on the Switchboard and OGI-MLTS corpora show that the error in segmentation is at most 25 milliseconds for 67% and 76.6% of the syllable segments, respectively.
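The smoothing step described in the abstract — treating the STE contour as the magnitude spectrum of an arbitrary signal and taking the group delay of the corresponding minimum-phase signal — can be sketched as below. This is a minimal illustration, not the paper's exact implementation; the `wsf` (window scale factor) parameter, the symmetrization, and the epsilon guards are assumptions made for the sketch.

```python
import numpy as np

def minimum_phase_group_delay(ste, wsf=6):
    """Smooth an STE contour via a minimum-phase group delay function.

    The STE contour is treated as the magnitude spectrum of an arbitrary
    signal; cepstral windowing yields a minimum-phase equivalent whose
    group delay is a smoothed version of the contour, with valleys at
    candidate syllable boundaries.
    """
    ste = np.asarray(ste, dtype=float)
    ste = ste / (ste.max() + 1e-12)
    # Symmetrize so the contour behaves like the magnitude
    # spectrum of a real signal.
    spec = np.concatenate([ste, ste[::-1]])
    n = len(spec)
    # Real cepstrum of the "spectrum" (deconvolution property).
    cep = np.fft.ifft(np.log(spec + 1e-12)).real
    # Keep only the causal part of the cepstrum -> minimum-phase signal.
    # wsf controls smoothing: larger wsf keeps fewer coefficients.
    lifter = np.zeros(n)
    keep = max(2, n // wsf)
    lifter[0] = 1.0
    lifter[1:keep] = 2.0
    x = np.fft.ifft(np.exp(np.fft.fft(cep * lifter))).real
    # Group delay via the standard identity
    # tau(w) = (XR*YR + XI*YI) / |X|^2, with y[k] = k * x[k].
    X = np.fft.fft(x)
    Y = np.fft.fft(np.arange(n) * x)
    tau = (X.real * Y.real + X.imag * Y.imag) / (np.abs(X) ** 2 + 1e-12)
    return tau[: n // 2]
```

Peak-picking on the negative of this function (or valley-picking on the function itself) would then give candidate boundary locations; varying `wsf` trades boundary resolution against over-segmentation.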

Highlights

  • One of the major reasons for considering the syllable as a basic unit for automatic speech recognition (ASR) systems is its better representational and durational stability relative to the phoneme [1].

  • The analysis shows that when the window scale factor (WSF) is varied from 4 to 10, the number of syllable boundaries detected equals the number of actual boundaries.

  • Each file is 45 seconds long. These files are manually segmented into syllabic units and used as a reference to verify the performance of our segmentation approach.


Summary

Introduction

One of the major reasons for considering the syllable as a basic unit for automatic speech recognition (ASR) systems is its better representational and durational stability relative to the phoneme [1]. Researchers have tried different ways of segmenting the speech signal either at the phoneme level or at the syllable level, with or without the use of phonetic transcription. These segmentation methods can further be classified into two categories: time-domain methods, which use features such as the short-term energy (STE) function and the zero-crossing rate, and frequency-domain methods, which use short-term spectral features.
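The two time-domain features named above can be sketched as follows. The frame length and hop size (25 ms and 10 ms at a 16 kHz sampling rate) are illustrative choices, not values taken from the paper.

```python
import numpy as np

def short_term_energy(signal, frame_len=400, hop=160):
    """Frame-wise short-term energy: sum of squared samples per frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sum(np.asarray(f, dtype=float) ** 2) for f in frames])

def zero_crossing_rate(signal, frame_len=400, hop=160):
    """Frame-wise zero-crossing rate: fraction of sign changes
    between consecutive samples in each frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.mean(np.abs(np.diff(np.sign(f))) > 0) for f in frames])
```

Voiced, high-energy regions (syllable nuclei) show high STE and low ZCR, while fricatives and silences show the opposite pattern, which is why these features are natural starting points for time-domain segmentation.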

