Abstract

This paper presents a wearable brain–computer interface that exploits neurofeedback in extended reality to enhance motor imagery training. Visual and vibrotactile feedback modalities were evaluated both individually and simultaneously. Only three acquisition channels and state-of-the-art chest-based vibrotactile feedback were employed. Experimental validation was carried out with eight subjects, each participating in two or three sessions on different days, with 360 trials per subject per session. Neurofeedback led to a statistically significant improvement in performance over the sessions, demonstrating for the first time the functionality of a motor imagery-based instrument even with a highly wearable electroencephalograph and a commercial gaming vibrotactile suit. In the best cases, classification accuracy exceeded 80%, with an improvement of more than 20% over initial performance. No feedback modality was generally preferable across the cohort; rather, the best feedback modality appears to be subject-dependent.
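The abstract does not detail the classification pipeline. A common approach for few-channel motor imagery BCIs is band-power (mu/beta log-variance) features fed to a linear discriminant classifier; the sketch below is a hypothetical illustration under that assumption, with synthetic data standing in for the recorded trials, and is not the authors' actual method.

```python
# Hypothetical sketch: few-channel motor-imagery classification via
# mu/beta band-power features + LDA. Sampling rate, bands, and the
# pipeline itself are illustrative assumptions, not the paper's method.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250            # assumed sampling rate (Hz)
N_CHANNELS = 3      # three acquisition channels, as stated in the abstract

def bandpower_features(trials, low=8.0, high=30.0, fs=FS):
    """Log-variance of band-filtered EEG: one feature per channel per trial."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)    # (n_trials, n_channels, n_samples)
    return np.log(np.var(filtered, axis=-1))      # (n_trials, n_channels)

# Synthetic stand-in for one session: 360 trials, 3 channels, 4-s epochs.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((360, N_CHANNELS, 4 * FS))
y = rng.integers(0, 2, size=360)                  # e.g. left- vs right-hand imagery

X = bandpower_features(X_raw)
clf = LinearDiscriminantAnalysis()
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With real session data in place of the synthetic array, the cross-validated accuracy from such a pipeline is the kind of per-session metric whose improvement across sessions the study reports.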
