Abstract

Decision confidence is an individual's feeling of correctness or optimality when making a decision. Various physiological signals, including electroencephalography (EEG) and eye movements, have been studied extensively for measuring levels of decision confidence in humans. While multimodal fusion generally performs better than single-modal approaches, it requires data from multiple modalities, which comes at a greater cost. In particular, collecting EEG data is complicated and time-consuming, whereas eye movement signals are much easier to acquire. To tackle this problem, we propose a cross-modal method based on generative adversarial learning. In our method, the intrinsic relationship between eye movement and EEG features in a high-level feature space is learned in the training phase, so that multimodal information can be obtained during the test phase when only eye movements are available as inputs. Experimental results on the SEED-VPDC dataset demonstrate that our proposed method outperforms single-modal methods trained and tested only on eye movement signals, with an improvement of approximately 5.43% in accuracy, and maintains competitive performance in comparison with multimodal methods. Our cross-modal approach requires only eye movements as inputs and reduces reliance on EEG data, making decision confidence measurement more applicable and practical.
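To make the cross-modal idea concrete, the following is a minimal, hypothetical PyTorch sketch of one way such a setup could look: a generator maps eye-movement features to pseudo-EEG features, a discriminator provides the adversarial signal against real EEG features during training, and a classifier predicts confidence from the fused features. All class names, layer sizes, feature dimensions, and the training loop below are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch of cross-modal adversarial training for decision
# confidence classification; sizes and structure are assumptions.
import torch
import torch.nn as nn

class EyeToEEGGenerator(nn.Module):
    """Maps eye-movement features to pseudo-EEG features."""
    def __init__(self, eye_dim=33, eeg_dim=310, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(eye_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, eeg_dim),
        )

    def forward(self, eye_feat):
        return self.net(eye_feat)

class Discriminator(nn.Module):
    """Distinguishes real EEG features from generated pseudo-EEG features."""
    def __init__(self, eeg_dim=310, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(eeg_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, eeg_feat):
        return self.net(eeg_feat)

class ConfidenceClassifier(nn.Module):
    """Predicts confidence level from fused eye and (pseudo-)EEG features."""
    def __init__(self, eye_dim=33, eeg_dim=310, n_classes=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(eye_dim + eeg_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, eye_feat, eeg_feat):
        return self.net(torch.cat([eye_feat, eeg_feat], dim=-1))

def train_step(G, D, C, opt_g, opt_d, eye_feat, eeg_feat, labels):
    # opt_d updates D; opt_g jointly updates G and C.
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()

    # Discriminator step: real EEG features vs. generated ones.
    opt_d.zero_grad()
    fake_eeg = G(eye_feat).detach()
    d_loss = bce(D(eeg_feat), torch.ones(eeg_feat.size(0), 1)) + \
             bce(D(fake_eeg), torch.zeros(fake_eeg.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # Generator + classifier step: fool D and classify confidence correctly.
    opt_g.zero_grad()
    fake_eeg = G(eye_feat)
    g_loss = bce(D(fake_eeg), torch.ones(fake_eeg.size(0), 1)) + \
             ce(C(eye_feat, fake_eeg), labels)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

At test time, only eye-movement features would be needed: `pseudo_eeg = G(eye_feat)` followed by `logits = C(eye_feat, pseudo_eeg)`, which is what allows the method to drop the EEG recording requirement after training.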
