Abstract

Recent developments in computer vision and the emergence of wearable sensors have opened opportunities for advanced techniques that enable multi-modal user assessment and personalized training, which are important in educational, industrial-training, and rehabilitation applications. They have also paved the way for assistive robots that accurately assess human cognitive and physical skills. Assessment and training cannot be generalized, because the requirements vary across individuals and applications; a system's ability to adapt to the individual's needs and performance is essential to its effectiveness. This paper focuses on task performance prediction, an important parameter for personalization. Several prior works predict task performance from physiological and behavioral data. In this work, we follow a multi-modal approach in which the system collects information from different modalities while the person performs a robot-based cognitive task and predicts performance from (a) the user's emotional state recognized from facial expressions (behavioral data), (b) the user's emotional state recognized from body postures (behavioral data), and (c) task performance estimated from EEG signals (physiological data). Combining physiological and behavioral data in this multi-modal approach produces the highest accuracy, 87.5%, outperforming prediction from any single modality. In particular, this approach is useful for finding associations between facial expressions, body postures, and brain signals while a person performs a cognitive task.
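The abstract does not specify how the three modality predictions are combined. As a hedged illustration only, one common way to merge per-modality classifiers is late fusion by weighted averaging of their class probabilities; the function name, weights, and two-class setup below are illustrative assumptions, not the authors' method:

```python
def late_fusion(face_probs, posture_probs, eeg_probs, weights=(1.0, 1.0, 1.0)):
    """Weighted average of per-modality class probabilities (late fusion).

    Each *_probs argument is a list of per-class probabilities produced by
    one modality's classifier (facial expression, body posture, EEG).
    The fusion weights are illustrative; the paper's actual combination
    rule is not stated in the abstract.
    """
    total_w = sum(weights)
    modalities = (face_probs, posture_probs, eeg_probs)
    # Fused probability for each class = weighted mean across modalities
    fused = [
        sum(w * p[i] for w, p in zip(weights, modalities)) / total_w
        for i in range(len(face_probs))
    ]
    # Predicted class = index of the highest fused probability
    return max(range(len(fused)), key=fused.__getitem__), fused

# Hypothetical two-class example (low / high performance):
face = [0.6, 0.4]     # facial-expression model output
posture = [0.3, 0.7]  # body-posture model output
eeg = [0.2, 0.8]      # EEG model output
label, fused = late_fusion(face, posture, eeg)
```

Here the EEG and posture models outvote the facial-expression model, so the fused prediction is class 1; a single-modality decision based on the face alone would have chosen class 0, which is the kind of disagreement a multi-modal combination can resolve.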
