Abstract

In this paper we present the first results of a pilot experiment on interpreting multimodal observations of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability with which human displays of awareness and emotion are detected. Application domains for such cognitive-model-based systems include healthy autonomous ageing and automated training systems, where the ability to observe cognitive abilities and emotional reactions allows an artificial system to provide appropriate assistance. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict their ability to respond effectively to challenging situations. Feature selection was performed to construct a multimodal classifier relying on the most relevant features from each modality. Initial results indicate that eye-gaze, body posture and emotion are good features for capturing such awareness. The experiment also validates our equipment as a general and reproducible tool for studying participants engaged in screen-based interaction and/or problem solving.

Highlights

  • Available sensing technologies are increasingly able to capture and interpret human displays of emotion and awareness through non-verbal channels

  • We chose to analyze a classification problem that can be interpreted by a human: is it possible, using gaze, body and/or facial emotion features, to detect whether a chess player is an expert? This problem serves as an example for a first validation of the relevance of our data (see the sketch after this list)

  • This study does not intend to demonstrate that humans engaged in problem solving always express the same basic emotion; rather, it shows variation in facial action unit (AU) activations
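
The classification question above can be made concrete. The sketch below is a minimal illustration, not the authors' pipeline: it assumes per-trial feature vectors (gaze, posture and facial action-unit statistics) have already been extracted, uses random placeholder data, and mirrors the feature-selection step described in the abstract by keeping only the most discriminative features before fitting a simple classifier.

    # Minimal sketch of expert-vs-novice classification with feature selection.
    # Data, labels and the feature count are placeholders, not the study's data.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 12))    # placeholder: 60 trials x 12 pooled features
    y = rng.integers(0, 2, size=60)  # placeholder labels: 1 = expert, 0 = novice

    clf = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectKBest(f_classif, k=5)),  # keep 5 most relevant features
        ("model", LogisticRegression()),
    ])
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

On real data, the random arrays would be replaced by the extracted feature matrix and participant labels; the indices retained by the "select" step then indicate which features, and hence which modalities, carry the most signal.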


Introduction

Available sensing technologies are increasingly able to capture and interpret human displays of emotion and awareness through non-verbal channels. However, these technologies tend to be sensitive to environmental conditions (e.g., noise, light exposure or occlusion) and thus produce intermittent and unreliable information. An ability to model the cognitive abilities of elderly subjects can permit an artificial system to provide assistance that is appropriate but not excessive. Such an ability can be used to provide emotional and cognitive stimulation that compensates for gradual declines in natural cognitive and motor abilities. The ability to model mental state and emotional reactions can also be used in on-line training systems to pose challenges that are appropriate to the trainee's current abilities.
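
Because each modality can drop out independently, a practical system needs a fusion rule that degrades gracefully. The following is a minimal sketch, not a method from this paper: per-modality probability estimates are averaged with fixed, assumed reliability weights, and missing sensors are simply skipped.

    # Confidence-weighted late fusion over intermittent modality streams.
    # Modality names and reliability weights are illustrative assumptions.
    from typing import Optional

    RELIABILITY = {"gaze": 0.8, "posture": 0.6, "emotion": 0.7}

    def fuse(probs: dict) -> Optional[float]:
        """Average available per-modality probabilities, weighted by reliability."""
        available = {m: p for m, p in probs.items() if p is not None}
        if not available:
            return None  # every sensor dropped out this frame
        total = sum(RELIABILITY[m] for m in available)
        return sum(RELIABILITY[m] * p for m, p in available.items()) / total

    # Example frame: the posture sensor is occluded and reports nothing.
    print(fuse({"gaze": 0.9, "posture": None, "emotion": 0.6}))  # 0.76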

