Abstract

Background. Rapid serial visual presentation (RSVP)-based brain-computer interfaces (BCIs) are widely used to classify target and non-target images. With single-subject electroencephalography (EEG) signals, the information available in a single trial limits prediction accuracy.

New Method. Hyperscanning records the signals of two or more subjects simultaneously. We therefore designed HyperscanNet, a multi-level information fusion model for target image detection based on dual-subject RSVP. Its two modules fuse the two subjects' signals at the data layer and at the feature layer. In the time dimension, a chunked long short-term memory (LSTM) network extracts features from different time periods separately, providing fine-grained low-level feature extraction. Alongside the feature-layer fusion, simple operations fuse the data layer so that important information is not missed.

Results. With the best group of channels and segment length, the method achieves an F1-score (the harmonic mean of precision and recall) of 82.76%.

Comparison with existing methods. The method improves the F1-score by at least 5% over single-subject target detection.

Conclusions. Two subjects collaborating on target detection achieve a higher and more stable F1-score than a single subject.

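To make the fusion scheme concrete, the sketch below shows one plausible reading of the architecture: per-subject chunked LSTMs for feature-layer fusion plus a simple projection of the concatenated raw signals for data-layer fusion. All layer sizes, channel/sample counts, chunking choices, and names (e.g. HyperscanNetSketch) are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of dual-subject multi-level fusion, assuming 64-channel EEG
# epochs of 256 samples split into 4 time chunks. Hyperparameters and fusion
# details are assumptions for illustration only.
import torch
import torch.nn as nn

class HyperscanNetSketch(nn.Module):
    def __init__(self, n_channels=64, n_samples=256, n_chunks=4, hidden=64):
        super().__init__()
        self.n_chunks = n_chunks
        self.chunk_len = n_samples // n_chunks
        # One LSTM per subject, applied to each time chunk separately
        # (the "chunked LSTM" for fine-grained temporal features).
        self.lstm_a = nn.LSTM(n_channels, hidden, batch_first=True)
        self.lstm_b = nn.LSTM(n_channels, hidden, batch_first=True)
        # Data-layer fusion branch: a plain projection of the two subjects'
        # raw signals stacked along the channel axis.
        self.data_fuse = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * n_channels * n_samples, hidden),
            nn.ReLU(),
        )
        # Classifier over the concatenated feature- and data-layer representations.
        self.classifier = nn.Linear(2 * n_chunks * hidden + hidden, 2)

    def _chunked_features(self, x, lstm):
        # x: (batch, n_samples, n_channels) -> per-chunk last hidden states.
        feats = []
        for c in range(self.n_chunks):
            chunk = x[:, c * self.chunk_len:(c + 1) * self.chunk_len, :]
            _, (h, _) = lstm(chunk)          # h: (1, batch, hidden)
            feats.append(h.squeeze(0))
        return torch.cat(feats, dim=1)       # (batch, n_chunks * hidden)

    def forward(self, eeg_a, eeg_b):
        # eeg_a, eeg_b: (batch, n_samples, n_channels), one tensor per subject.
        feat_a = self._chunked_features(eeg_a, self.lstm_a)
        feat_b = self._chunked_features(eeg_b, self.lstm_b)
        fused_data = self.data_fuse(torch.cat([eeg_a, eeg_b], dim=2))
        fused = torch.cat([feat_a, feat_b, fused_data], dim=1)
        return self.classifier(fused)        # logits: target vs. non-target

# Example: a batch of 8 paired single-trial epochs.
model = HyperscanNetSketch()
logits = model(torch.randn(8, 256, 64), torch.randn(8, 256, 64))
print(logits.shape)  # torch.Size([8, 2])
```

The key design point this sketch illustrates is that the two fusion paths are complementary: the chunked LSTMs summarize each subject's temporal dynamics period by period, while the data-layer branch passes the raw concatenated signals through so that information lost in feature extraction can still reach the classifier.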