Abstract

A system capable of both recognising and synthesising emotional content in speech is developed. First, the information relating the physical features of emotional speech to the emotional content perceived by listeners is estimated through linear statistical methods and applied to the system. The system performs emotion recognition and synthesis by means of a simple linear operation using this relation information. In the system, the pitch contour is expressed by the seven-parameter model proposed by Hirose, Fujisaki & Yamaguchi (1984), the power envelope is approximated by five line segments (11 parameters), and PSOLA (Pitch-Synchronous Overlap-Add) is applied to synthesise the speech. A set of emotional words with very little mutual correlation was selected through preliminary statistical experiments. The relation information was verified as significant and, according to the experimental results, the system was able to recognise and synthesise emotional content in speech much as human subjects did.
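The abstract's core idea, a linear relation estimated between acoustic features and perceived emotion, used in one direction for recognition and in the other for synthesis, can be sketched as follows. This is a minimal illustration under assumed dimensions (the 7 pitch-contour plus 11 power-envelope parameters mentioned above, and a hypothetical set of six emotional words); the actual estimation procedure and data in the paper may differ.

```python
import numpy as np

# Hypothetical sketch of the linear relation described in the abstract.
# Dimensions are assumptions: 7 pitch-contour + 11 power-envelope
# parameters per utterance, and 6 weakly correlated emotional words.
rng = np.random.default_rng(0)

n_utterances = 50
n_features = 18   # 7 pitch-contour + 11 power-envelope parameters
n_emotions = 6    # assumed number of emotional words

X = rng.normal(size=(n_utterances, n_features))   # acoustic features
E = rng.normal(size=(n_utterances, n_emotions))   # listener ratings

# Estimate the relation matrix W so that E ~= X @ W (least squares).
W, *_ = np.linalg.lstsq(X, E, rcond=None)

# Recognition: map the features of a new utterance to emotion scores.
x_new = rng.normal(size=n_features)
e_scores = x_new @ W

# Synthesis: map a target emotion vector back to acoustic features
# via the pseudo-inverse of W, then drive the parametric synthesiser.
e_target = np.zeros(n_emotions)
e_target[0] = 1.0
x_target = e_target @ np.linalg.pinv(W)
```

Both directions are single matrix multiplications, which matches the abstract's claim that recognition and synthesis reduce to "a simple linear operation" once the relation information is estimated.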
