Abstract
Emotive speech is a non-invasive and cost-effective biomarker for a wide spectrum of neurological disorders, and computational systems have been built to automate diagnosis. To explore how routine speech analysis can be automated in the presence of hard-to-learn pathology patterns, we propose a framework for assessing the level of competence in paralinguistic communication. Initially, the assessment relies on a perceptual experiment completed by human listeners, and we propose a model, the Aggregated Ear, that draws a conclusion about the level of competence demonstrated by the patient. We then automate the Aggregated Ear, arriving at a computational model that summarizes the portfolio of speech evidence on the patient; its central component is a classical emotion recognition system. The code and the medical data are available from the corresponding author on request.
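To make the aggregation step concrete, below is a minimal sketch of one plausible reading of the Aggregated Ear: a majority vote over per-utterance emotion labels from several listeners, scored against the emotion the speaker intended to convey. The function name, inputs, and scoring rule are illustrative assumptions, not the model described in the paper.

```python
from collections import Counter

def aggregated_ear_score(listener_labels, intended_emotions):
    """Hypothetical aggregation sketch, not the paper's actual model.

    listener_labels: one list per utterance, holding the emotion label
        each human listener assigned to that utterance.
    intended_emotions: the emotion each utterance was meant to convey.
    Returns the fraction of utterances whose majority-vote label
        matched the intended emotion (a crude competence score in [0, 1]).
    """
    hits = 0
    for labels, target in zip(listener_labels, intended_emotions):
        # Majority vote across listeners for this utterance.
        majority, _ = Counter(labels).most_common(1)[0]
        hits += (majority == target)
    return hits / len(intended_emotions)

# Example: three utterances, each judged by three listeners.
labels = [["happy", "happy", "sad"],
          ["angry", "angry", "angry"],
          ["sad", "neutral", "happy"]]
intent = ["happy", "angry", "sad"]
print(aggregated_ear_score(labels, intent))  # 2 of 3 majorities match: ~0.67
```

In the automated variant, the per-listener labels would presumably be replaced by the outputs of the classical emotion recognition system, with the same summarizing step producing the competence-level conclusion.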