Abstract

The usual event-related potential (ERP) estimate is the average across epochs time-locked to the stimuli of interest. These stimuli are repeated several times to improve the signal-to-noise ratio (SNR), and a single evoked potential is estimated within the temporal window of interest. Consequently, the average estimate does not account for other neural responses within the same epoch that arise from short inter-stimulus intervals. These adjacent neural responses may overlap with and distort the evoked potential of interest. This overlap is a significant issue for the eye fixation-related potential (EFRP) technique, in which epochs are time-locked to ocular fixations: inter-fixation intervals are not experimentally controlled and can be shorter than the latency of the neural response. First, Tikhonov regularization applied to the classical average estimate was introduced to improve the SNR for a given number of trials, with generalized cross-validation used to select the optimal value of the ridge parameter. Then, to deal with overlap, the general linear model (GLM) was used to extract all neural responses within an epoch. Finally, regularization was also applied to the GLM. The models (the classical average and the GLM, each with and without regularization) were compared on simulated data and on real datasets from a visual scene exploration experiment co-registered with an eye-tracker and from a P300 Speller experiment. Regularization improved the average estimate for a given number of trials, while the GLM was more robust and efficient, and its efficiency was further reinforced by regularization.
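To make the described pipeline concrete, below is a minimal sketch, assuming a single-channel EEG signal, of overlap-aware estimation with a GLM design matrix, Tikhonov (ridge) regularization, and a ridge parameter selected by generalized cross-validation. The helper names (`build_design_matrix`, `ridge_gcv`), the event onsets, and the toy signal are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code): GLM deconvolution of overlapping evoked
# responses with Tikhonov (ridge) regularization; the ridge parameter is
# chosen by generalized cross-validation (GCV).
import numpy as np

def build_design_matrix(n_samples, onsets, win):
    """Design matrix with one column per post-event latency and one row per
    EEG sample; overlapping responses contribute to the same rows."""
    X = np.zeros((n_samples, win))
    for t0 in onsets:
        for lag in range(win):
            t = t0 + lag
            if t < n_samples:
                X[t, lag] += 1.0
    return X

def ridge_gcv(X, y, lambdas):
    """Ridge (Tikhonov) solution with the regularization parameter minimizing
    the GCV criterion."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y
    n = len(y)
    best_gcv, best_lam = np.inf, None
    for lam in lambdas:
        f = s**2 / (s**2 + lam)                 # shrinkage factors
        residual = y - U @ (f * Uty)
        gcv = (residual @ residual / n) / (1.0 - f.sum() / n) ** 2
        if gcv < best_gcv:
            best_gcv, best_lam = gcv, lam
    beta = Vt.T @ ((s / (s**2 + best_lam)) * Uty)
    return beta, best_lam

# Toy usage: event onsets closer together than the estimation window, so the
# simulated responses overlap (as with short inter-fixation intervals).
rng = np.random.default_rng(0)
n_samples, win = 500, 100
onsets = [50, 110, 260, 330]                     # intervals < win -> overlap
true_resp = np.hanning(win)                      # surrogate evoked response
y = np.zeros(n_samples)
for t0 in onsets:
    y[t0:t0 + win] += true_resp
y += 0.5 * rng.standard_normal(n_samples)        # additive noise
X = build_design_matrix(n_samples, onsets, win)
beta, lam = ridge_gcv(X, y, np.logspace(-3, 3, 25))
print("selected ridge parameter:", lam)
```

In this sketch, `beta` plays the role of the deconvolved response estimate: unlike the epoch average, it attributes each sample to the latencies of all events that contribute to it, and the ridge penalty stabilizes the estimate when few events are available.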
