Abstract
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) that deliver a high information transfer rate (ITR) usually require a subject's calibration data to learn class- and subject-specific model parameters (e.g., the spatial filters and SSVEP templates). The amount of calibration data needed typically grows with the number of classes (i.e., visual stimuli), which can be large and consequently lead to a time-consuming calibration. This study presents a transfer learning scheme that substantially reduces the calibration effort. Inspired by parameter-based and instance-based transfer learning techniques, we propose a subject-transfer-based canonical correlation analysis (stCCA) method that exploits knowledge both within a subject and across subjects, and thus requires little calibration data from a new subject. An evaluation on two SSVEP datasets (from Tsinghua and UCSD) shows that stCCA performs well with only a small amount of calibration data, achieving an ITR of 198.18±59.12 bits/min with 9 calibration trials on the Tsinghua dataset and 111.04±57.24 bits/min with 3 trials on the UCSD dataset. These results are comparable to those obtained with the multi-stimulus CCA (msCCA) and ensemble task-related component analysis (eTRCA) methods using their minimally required calibration data (at least 40 trials on the Tsinghua dataset and at least 12 trials on the UCSD dataset). Combining inter- and intra-subject transfer thus allows the recognition method to achieve a high ITR with very little calibration effort. The proposed approach saves substantial calibration effort without sacrificing ITR, which is significant for practical SSVEP-based BCIs.
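For context, the calibration-free baseline that the CCA family builds on correlates the multichannel EEG with sinusoidal reference signals at each candidate stimulus frequency and selects the frequency with the largest canonical correlation. The sketch below is a minimal illustration of that standard CCA recognizer, not of the proposed stCCA (which additionally learns a transferred spatial filter and subject-transferred templates); all function names and parameter values here are our own illustrative choices.

```python
import numpy as np

def sinusoidal_reference(freq, fs, n_samples, n_harmonics=3):
    """Sin/cos reference signals at a stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    rows = []
    for h in range(1, n_harmonics + 1):
        rows.append(np.sin(2 * np.pi * h * freq * t))
        rows.append(np.cos(2 * np.pi * h * freq * t))
    return np.vstack(rows)                      # shape: (2*n_harmonics, n_samples)

def max_canonical_correlation(X, Y):
    """Largest canonical correlation between the row spaces of X and Y.

    Uses the QR/SVD identity: the canonical correlations between two
    subspaces are the singular values of Qx.T @ Qy.
    """
    Xc = X - X.mean(axis=1, keepdims=True)      # center each channel
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Qx, _ = np.linalg.qr(Xc.T)
    Qy, _ = np.linalg.qr(Yc.T)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_classify(eeg, candidate_freqs, fs):
    """Return the index of the stimulus frequency best matching the EEG trial.

    eeg: array of shape (n_channels, n_samples).
    """
    scores = [
        max_canonical_correlation(eeg, sinusoidal_reference(f, fs, eeg.shape[1]))
        for f in candidate_freqs
    ]
    return int(np.argmax(scores))
```

The ITR figures quoted above can be reproduced from accuracy, class count, and selection time with the standard Wolpaw formula. The helper below is a sketch under the usual assumptions; the class counts of 40 and 12 for the two datasets are inferred from the minimum-trial numbers in the abstract, and the example accuracy and timing are illustrative, not taken from the paper.

```python
import math

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Wolpaw ITR: bits per selection scaled to bits/min."""
    if accuracy >= 1.0:
        bits = math.log2(n_classes)
    elif accuracy <= 1.0 / n_classes:
        bits = 0.0                               # at or below chance level
    else:
        bits = (math.log2(n_classes)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))
    return bits * 60.0 / trial_seconds

# Illustrative only: a 40-class speller at 90% accuracy, 1.5 s per selection
print(round(itr_bits_per_min(40, 0.90, 1.5), 2))  # ~172.97 bits/min
```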