Abstract

This paper briefly introduces the project "AudEeKA", which aims to use speech and other biosignals for emotion recognition to improve remote, as well as direct, healthcare. The article examines the use cases, goals, and challenges of researching and implementing a possible solution. To gain additional insights, the main goal of the project is divided into multiple sub-goals, namely speech emotion recognition, stress detection and classification, and emotion detection from physiological signals. Similar projects are also considered, and project-specific requirements stemming from the use cases are introduced. Possible pitfalls and difficulties are outlined, most of which are associated with datasets; they also emerge from the requirements, their accompanying restrictions, and first analyses in the area of speech emotion recognition, which are briefly presented and discussed. In addition, first approaches to solutions for every sub-goal, including the use of continual learning, and finally a draft of the planned architecture for the envisioned system are presented. This draft offers a possible way of combining all sub-goals while reaching the main goal of a multimodal emotion recognition system.
