Abstract

A Brain-Computer Interface (BCI) character speller allows humans to spell characters directly with their eye gaze, building a communication channel between the brain and a computer. Popular BCI character speller systems employ a large number of sensors, which hinders their use in daily life. Sensor selection methods, which pick an appropriate subset from an initial large sensor set, can reduce the number of sensors needed to acquire brain signals without sacrificing character spelling accuracy, bringing BCI character spellers closer to everyday use. However, current sensor selection methods cannot find subsets small enough to further reduce the number of sensors without losing spelling accuracy. To address this issue, we propose a novel sensor selection method based on a specific Convolutional Neural Network (CNN) we have devised. Our method uses a parametric backward elimination algorithm with the CNN as a ranking function to evaluate sensors and eliminate the less important ones. We perform experiments on three benchmark datasets, comparing the minimal number of sensors each selection method needs to match the spelling accuracy achieved with the initial large sensor set. The results show that, in most cases, our method selects fewer sensors than competing methods, reducing the sensor count by up to 44.
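The backward elimination strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the toy scoring function are hypothetical, and the toy score stands in for the paper's CNN-based ranking function, which would evaluate spelling accuracy on a candidate sensor subset.

```python
from typing import Callable, List, Set


def backward_eliminate(
    sensors: List[int],
    score: Callable[[Set[int]], float],
    target: float,
) -> Set[int]:
    """Greedy backward elimination: repeatedly drop the sensor whose
    removal hurts the score least, as long as the score of the reduced
    subset stays at or above the target accuracy."""
    selected = set(sensors)
    while len(selected) > 1:
        best_sensor, best_score = None, float("-inf")
        for s in selected:
            # Score the candidate subset with sensor s removed.
            subset_score = score(selected - {s})
            if subset_score > best_score:
                best_sensor, best_score = s, subset_score
        if best_score < target:
            break  # removing any further sensor would lose accuracy
        selected.remove(best_sensor)
    return selected


# Hypothetical stand-in for the CNN ranking function: accuracy is high
# only while the "important" sensors {0, 1} remain in the subset.
def toy_score(subset: Set[int]) -> float:
    return 0.9 if {0, 1} <= subset else 0.5


print(backward_eliminate(list(range(8)), toy_score, target=0.9))
```

In this toy setup the loop strips away sensors 2 through 7 one at a time and stops once only the two informative sensors remain, mirroring how the paper shrinks the sensor set while keeping spelling accuracy at the full-set level.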
