Abstract

Numerous studies on automatic emotion recognition and speech detection have been carried out in the Laboratory of Speech Acoustics. This article reviews results of automatic emotion recognition experiments on spontaneous speech databases using acoustic information only. Different acoustic parameters were compared for the acoustic preprocessing, and Support Vector Machines were used for the classification. In spontaneous speech, automatic emotion recognition must be preceded by speech detection and speech segmentation, which divide the audio material into recognition units. Here, the phrase was selected as the unit of segmentation. A method based on Hidden Markov Models was developed that performs speech detection and automatic phrase segmentation simultaneously. The method was tested on a noisy spontaneous telephone speech database, and emotion classification was then performed on the detected and segmented speech.
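The classification step described above can be sketched with a standard Support Vector Machine operating on phrase-level acoustic feature vectors. The feature dimensionality, class labels, and data below are illustrative assumptions (synthetic vectors standing in for pitch, energy, and spectral statistics), not the paper's actual configuration.

```python
# Minimal sketch: SVM classification of phrase-level acoustic features.
# In a real system, each row would hold acoustic statistics (e.g., mean
# pitch, energy, spectral measures) computed over one detected phrase.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical phrase-level feature vectors for two emotion classes
n_per_class, n_features = 40, 8
neutral = rng.normal(0.0, 1.0, (n_per_class, n_features))
angry = rng.normal(2.0, 1.0, (n_per_class, n_features))

X = np.vstack([neutral, angry])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 0=neutral, 1=angry

clf = SVC(kernel="rbf", C=1.0)  # RBF-kernel SVM, a common default choice
clf.fit(X, y)

acc = clf.score(X, y)  # training accuracy on this well-separated toy data
print(f"training accuracy: {acc:.2f}")
```

On real spontaneous-speech data the classes overlap far more than in this toy setup, so feature selection and cross-validated evaluation would be essential.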
