Abstract

Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention via electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examine the impact that shortening the stimulus onset asynchrony (SOA) has on this auditory BCI. While a very short SOA might improve performance, sound perception and task performance become difficult, and event-related potentials (ERPs) may not be induced if the SOA is too short. Therefore, we carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (target direction). We used eight different SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participants' behavioral responses to the target direction and evaluated recognition performance of the stimuli. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target sound directions in all but the 200-ms SOA condition. When we calculated identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened to 400 ms without decreasing identification accuracy, allowing improvements in performance (evaluated by BCI utility). On average, higher BCI utilities were obtained in the 400 and 500-ms SOA conditions. Thus, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA.
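
To make the analysis described above concrete, the sketch below classifies target vs. non-target ERP epochs with a linear Fisher discriminant and converts the resulting accuracy into BCI utility for each SOA. This is a minimal sketch on synthetic data, not the authors' pipeline: the epoch dimensions, the scikit-learn LDA stand-in for FDA, the assumption of 10 stimuli per selection, and the utility formulation (one common definition, following Dal Seno et al., 2010) are all illustrative assumptions.

```python
# Minimal sketch (assumed details, not the paper's exact pipeline):
# classify target vs. non-target ERP epochs with a linear Fisher
# discriminant and convert accuracy into BCI utility per SOA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic ERP features: n_epochs x (channels * time samples), with a
# small additive "P300-like" offset for target epochs (illustration only).
n_epochs, n_features = 600, 64
X = rng.normal(size=(n_epochs, n_features))
y = rng.integers(0, 2, size=n_epochs)          # 1 = target, 0 = non-target
X[y == 1, :10] += 0.5                          # hypothetical target effect

# Fisher discriminant analysis (LDA is the standard linear implementation).
fda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(fda, X, y, cv=5).mean()

def bci_utility(p_correct, n_choices, trial_time_s):
    """One common utility formulation (Dal Seno et al., 2010):
    U = (2P - 1) * log2(N - 1) / T for P > 0.5, else 0 (bits/s)."""
    if p_correct <= 0.5:
        return 0.0
    return (2 * p_correct - 1) * np.log2(n_choices - 1) / trial_time_s

# Hypothetical example: 6 sound directions, 10 stimuli per selection,
# so one selection takes roughly 10 * SOA seconds.
for soa_ms in (200, 300, 400, 500, 600, 700, 800, 1100):
    trial_time = 10 * soa_ms / 1000.0
    print(f"SOA {soa_ms} ms: utility = "
          f"{bci_utility(accuracy, 6, trial_time):.3f} bits/s")
```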

Highlights

  • We confirmed that virtual sounds generated using out-of-head sound localization tuned to each participant were well localized in each direction

  • Our results showed that a stimulus onset asynchrony (SOA) of 200 ms resulted in low performance on average, but this does not mean that a 200-ms SOA is inappropriate for auditory BCIs using virtual sounds


Introduction

Brain-computer interfaces (BCIs)—systems that can operate external devices using only brain signals—have been actively studied in recent years (Wolpaw and Wolpaw, 2012), and are expected to provide a method of communication and interaction for people with severe motor disabilities. Auditory BCIs using spatial information such as sound-source direction have been studied and are considered intuitive and easy to use (Schreuder et al., 2010, 2011; Gao et al., 2011; Käthner et al., 2013; Nambu et al., 2013; Simon et al., 2014). In a previous study (Nambu et al., 2013), we used a system of auditory stimuli from different directions that was generated by out-of-head sound localization technology and presented as virtual sound over earphones (Shimada and Hayashi, 1995). The virtual sounds were produced using individual head-related transfer functions (HRTFs). Because this system can generate spatial sound accurately without having to place loudspeakers, it is considered a viable option for use in a compact and portable BCI system.
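
The sketch below illustrates the basic idea behind this kind of virtual sound presentation: convolving a monaural stimulus with the listener's left- and right-ear head-related impulse responses (HRIRs) for the desired direction and presenting the binaural result over earphones. The tone stimulus and the random placeholder HRIRs are hypothetical; a real out-of-head sound localization system, as in the cited work, uses HRIRs measured individually for each participant and direction (typically together with compensation for the earphone characteristics).

```python
# Minimal sketch of virtual sound generation with individual HRIRs
# (placeholder data only; real systems use HRIRs measured for each
# participant and each of the six stimulus directions).
import numpy as np
from scipy.signal import fftconvolve

fs = 44100                                   # sampling rate (Hz)

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono stimulus with left/right HRIRs to obtain a
    binaural (2-channel) signal for earphone presentation."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Hypothetical stimulus: a 100-ms, 1-kHz tone burst.
t = np.arange(int(0.1 * fs)) / fs
tone = np.sin(2 * np.pi * 1000 * t)

# Placeholder HRIRs (random noise); in practice these are measured per
# participant so that the virtual source is localized out of the head.
hrir_l = np.random.default_rng(1).normal(size=256) * 0.05
hrir_r = np.random.default_rng(2).normal(size=256) * 0.05

binaural = spatialize(tone, hrir_l, hrir_r)
print(binaural.shape)   # (samples, 2): stereo signal for earphones
```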

