Abstract

For multimodal Human-Computer Interaction (HCI), it is very useful to identify the modalities through which the user is currently processing information. This would enable a system to select complementary output modalities and thereby reduce the user's workload. In this paper, we develop a hybrid Brain-Computer Interface (BCI) which uses Electroencephalography (EEG) and functional Near Infrared Spectroscopy (fNIRS) to discriminate and detect visual and auditory stimulus processing. We describe the experimental setup used to collect a data corpus from 12 subjects. On these data, we performed cross-validation evaluations and report accuracy for several classification conditions. The results show that the subject-dependent systems achieved a classification accuracy of 97.8% for discriminating visual and auditory perception processes from each other and up to 94.8% for detecting modality-specific processes independently of other cognitive activity. The same classification conditions could also be handled in a subject-independent fashion, with accuracies of up to 94.6% and 86.7%, respectively. We also examine the contributions of the two signal types and show that the fusion of classifiers using different features significantly increases accuracy.
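The subject-independent condition mentioned above can be illustrated with a small evaluation sketch. This is only an assumption about the protocol: the exact cross-validation scheme is not specified in this summary, and the placeholder data, labels, and classifier (scikit-learn's LinearDiscriminantAnalysis) below are stand-ins for illustration.

```python
# Hypothetical sketch: subject-independent evaluation where each fold holds out
# all trials of one of the 12 subjects (leave-one-subject-out cross-validation).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 12, 40, 64

X = rng.normal(size=(n_subjects * trials_per_subject, n_features))  # placeholder feature vectors
y = rng.integers(0, 2, size=len(X))                                  # 0 = visual, 1 = auditory (placeholder)
groups = np.repeat(np.arange(n_subjects), trials_per_subject)        # subject ID for each trial

# The classifier is always tested on trials from a subject it never saw during training.
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(f"mean accuracy over held-out subjects: {scores.mean():.3f}")
```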

Highlights

  • Over the last decade, multimodal user interfaces have become omnipresent in the field of human-computer interaction and in commercially available devices (Turk, 2014)

  • The fusion of multiple features from different signal types significantly improved recognition accuracy (see the sketch after this list)

  • In general, functional Near Infrared Spectroscopy (fNIRS)-based features were outperformed by features based on the EEG signal
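As a rough illustration of the fusion idea referenced above, the sketch below averages the predicted class probabilities of two classifiers trained separately on EEG and fNIRS features. The equal-weight fusion rule, the choice of LDA, and the function name are assumptions made for illustration, not the paper's actual method.

```python
# Hypothetical late-fusion sketch: train one classifier per signal type and
# average their class probabilities before taking the final decision.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fused_predict(X_eeg_train, X_nirs_train, y_train, X_eeg_test, X_nirs_test):
    """Return fused class decisions for the test trials."""
    clf_eeg = LinearDiscriminantAnalysis().fit(X_eeg_train, y_train)    # EEG-feature classifier
    clf_nirs = LinearDiscriminantAnalysis().fit(X_nirs_train, y_train)  # fNIRS-feature classifier
    proba = 0.5 * (clf_eeg.predict_proba(X_eeg_test) +
                   clf_nirs.predict_proba(X_nirs_test))                 # equal-weight probability fusion
    return proba.argmax(axis=1)
```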


Summary

INTRODUCTION

Multimodal user interfaces have become omnipresent in the field of human-computer interaction and in commercially available devices (Turk, 2014). While user studies, e.g., Heger et al. (2011), show that human-computer interaction can be improved using this construct, many use cases, such as the selection between auditory and visual output modalities mentioned above, require a more fine-grained model of mental workload, such as multiple resource theory (Wickens, 2008). This paper contributes a number of substantial findings to the field of passive BCIs for HCI: We trained and evaluated classifiers which can either discriminate between predominantly visual and predominantly auditory perceptual activity or detect visual and auditory activity independently of each other. The latter is ecologically important, as many real-life tasks demand both visual and auditory resources. We also evaluate the system on continuous data.
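The distinction between discriminating the two modalities and detecting each one independently can be sketched as follows; the two-detector setup and all names below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: instead of one binary "visual vs. auditory" classifier,
# two independent detectors decide per trial whether visual and/or auditory
# processing is present (a trial may trigger neither, one, or both).
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class ModalityDetectors:
    def fit(self, X, y_visual, y_auditory):
        # y_visual / y_auditory: 1 if the trial involved visual / auditory stimulation
        self.visual = LinearDiscriminantAnalysis().fit(X, y_visual)
        self.auditory = LinearDiscriminantAnalysis().fit(X, y_auditory)
        return self

    def predict(self, X):
        # Two independent binary decisions per trial.
        return self.visual.predict(X), self.auditory.predict(X)
```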

MATERIALS AND METHODS
DATA ACQUISITION
GRAND AVERAGES
EEG features
RESULTS
DISCUSSION