Abstract

Functional near-infrared spectroscopy (fNIRS) can measure neural activity through blood-oxygenation changes in the brain in a wearable form factor, enabling unique applications for research both inside and outside the lab and in practical occupational settings. fNIRS has proven capable of measuring cognitive states such as mental workload, often using machine-learning (ML) based brain-computer interfaces (BCIs). To date, this research has largely relied on probes with channel counts ranging from under ten to several hundred, although a new class of wearable NIRS devices featuring thousands of channels has recently emerged. This poses unique challenges for ML classification, as fNIRS studies are typically limited to a small number of training trials, resulting in severely under-determined estimation problems. So far, it is not well understood how such high-resolution data is best leveraged in practical BCIs and whether state-of-the-art or better performance can be achieved. To address these questions, we propose an ML strategy for classifying working-memory load that relies on spatio-temporal regularization and transfer learning from other subjects, a combination that, to our knowledge, has not been used in previous fNIRS BCIs. The approach can be interpreted as an end-to-end generalized linear model and allows for a high degree of interpretability using channel-level or cortical imaging approaches. We show that, using the proposed methodology, it is possible to achieve state-of-the-art decoding performance with high-resolution fNIRS data. We also replicate several state-of-the-art approaches on our dataset of 43 participants who performed the n-Back task while wearing a NIRS device with 3198 dual channels, and show that these existing methodologies struggle in the high-channel regime and are largely outperformed by the proposed pipeline. Our approach helps establish high-channel NIRS devices as a viable platform for state-of-the-art BCI, opening new applications for this class of headset while also enabling high-resolution model imaging and interpretation.
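
The abstract describes the method only at a high level. Purely as an illustration of the ingredients it names (a generalized linear model, spatio-temporal regularization, and cross-subject transfer), the following Python sketch fits a logistic GLM whose weight map is smoothed by spatial and temporal graph Laplacians and shrunk toward weights learned on other subjects. All names and the specific penalty forms (fit_st_glm, L_space, L_time, w_prior, lam_s, lam_t, lam_p) are assumptions made for this sketch, not the authors' implementation.

    import numpy as np
    from scipy.optimize import minimize


    def fit_st_glm(X, y, L_space, L_time, w_prior=None,
                   lam_s=1.0, lam_t=1.0, lam_p=1.0):
        """Fit a logistic GLM with spatio-temporal smoothness penalties.

        X: (trials, channels, timepoints) feature array; y: (trials,) in {0, 1}.
        L_space, L_time: graph Laplacians encoding channel adjacency and
        temporal smoothness; w_prior: optional flattened weight map fitted
        on other subjects, used here as a simple transfer-learning prior.
        """
        n, c, t = X.shape
        Xf = X.reshape(n, c * t)

        def objective(w):
            W = w.reshape(c, t)
            z = Xf @ w
            # Logistic negative log-likelihood: sum log(1 + e^z) - y * z.
            nll = np.sum(np.logaddexp(0.0, z) - y * z)
            # Spatial and temporal Tikhonov penalties on the weight map.
            pen = (lam_s * np.trace(W.T @ L_space @ W)
                   + lam_t * np.trace(W @ L_time @ W.T))
            # Shrink toward weights learned on other subjects, if given.
            if w_prior is not None:
                pen += lam_p * np.sum((w - w_prior) ** 2)
            return nll + pen

        # Numerical gradients keep the sketch short; an analytic gradient
        # would be needed at realistic (thousands-of-channels) scale.
        res = minimize(objective, np.zeros(c * t), method="L-BFGS-B")
        return res.x.reshape(c, t)

Because the fitted weight map W retains its channels-by-timepoints structure, it can be projected back onto the probe layout or cortical surface, which is consistent with the channel-level and cortical-imaging interpretability the abstract emphasizes.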
