Abstract

Multimodal sensory feedback was exploited in the present study to improve the detection of neurological phenomena associated with motor imagery. To this aim, visual and haptic feedback were delivered simultaneously to the user of a brain-computer interface. The motor imagery-based brain-computer interface was built using a wearable, portable electroencephalograph with only eight dry electrodes, a haptic suit, and a purposely implemented virtual reality application. Preliminary experiments were carried out with six subjects participating in five sessions on different days. The subjects were randomly divided into a “control group” and a “neurofeedback group”. The former performed pure motor imagery without receiving any feedback, while the latter received multimodal feedback in response to their imaginative acts. Cross-validation results showed that at most 61% classification accuracy was achieved in pure motor imagination. By contrast, subjects in the “neurofeedback group” achieved up to 82% mean accuracy, with a peak of 91% in one session. However, no improvement in pure motor imagery was observed over the sessions, whether subjects practiced pure motor imagery or practiced with feedback.
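The accuracies above are averages over cross-validation folds. As a minimal sketch of how such a figure is computed, the snippet below runs k-fold cross-validation over synthetic two-class "feature" data; the nearest-centroid classifier and the data are illustrative stand-ins, since the abstract does not specify the classifier or feature extraction used in the study.

```python
import random

def k_fold_accuracy(X, y, k=5, seed=0):
    """Mean classification accuracy over k folds.

    Uses a nearest-centroid classifier as an illustrative
    stand-in for the (unspecified) motor-imagery classifier.
    """
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k roughly equal test folds
    accs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        # per-class centroids estimated on the training folds only
        cents = {}
        for c in set(y):
            pts = [X[i] for i in train if y[i] == c]
            cents[c] = [sum(v) / len(pts) for v in zip(*pts)]
        # classify each held-out trial by its nearest centroid
        correct = 0
        for i in fold:
            pred = min(
                cents,
                key=lambda c: sum((a - b) ** 2 for a, b in zip(X[i], cents[c])),
            )
            correct += pred == y[i]
        accs.append(correct / len(fold))
    return sum(accs) / k

# synthetic, well-separated two-class data (hypothetical features):
# class 0 clustered near (0, 0), class 1 near (2, 2)
X = [(0.1 * i, 0.1 * i) for i in range(10)] + \
    [(2 + 0.1 * i, 2 + 0.1 * i) for i in range(10)]
y = [0] * 10 + [1] * 10
acc = k_fold_accuracy(X, y, k=5)
```

In practice a session's accuracy (e.g. the 61% or 82% reported above) would be the value of `acc` computed on the real EEG feature vectors and motor-imagery labels.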
