Abstract

Data-driven strategies have been widely used to distinguish experimental effects in single-trial EEG signals. However, how latency variability, such as within-condition jitter or latency shifts between conditions, affects the performance of EEG classifiers has not been well investigated. Without explicitly considering and disentangling these attributes of single trials, neural network-based classifiers cannot quantify their individual contributions. Inspired by domain knowledge of subcomponent latency and amplitude from traditional cognitive neuroscience, this study applies a stepwise latency correction method to single trials to control for their contributions to classifier behavior. As a case study demonstrating the value of this method, we measure repetition priming effects of faces, which induce large reaction time differences, latency shifts, and amplitude effects in averaged event-related potentials. The results show that within-condition jitter degrades classifier performance and between-condition latency shifts improve accuracy, whereas genuine amplitude differences have no significant influence. Although demonstrated here for priming effects, this methodology generalizes to experiments involving many kinds of time-varying signals, making it possible to account for the contributions of latency variability to classifier performance.
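To make the idea of single-trial latency correction concrete, the following is a minimal, illustrative sketch in Python (NumPy only). It is not the authors' stepwise procedure; it assumes a simple cross-correlation estimate of each trial's component latency relative to a condition-average template, and realigns trials by that estimate before classification. All function names and parameters here are hypothetical.

```python
import numpy as np

def estimate_latency_shift(trial, template, max_shift):
    """Estimate a trial's latency (in samples) relative to a template by
    maximizing the inner product over shifts in [-max_shift, max_shift]."""
    shifts = np.arange(-max_shift, max_shift + 1)
    scores = [np.dot(np.roll(trial, -s), template) for s in shifts]
    return shifts[int(np.argmax(scores))]

def align_trials(trials, template, max_shift=50):
    """Shift each trial so its estimated latency matches the template.
    trials: (n_trials, n_samples) array for a single channel."""
    aligned = np.empty_like(trials)
    for i, trial in enumerate(trials):
        s = estimate_latency_shift(trial, template, max_shift)
        aligned[i] = np.roll(trial, -s)  # remove the estimated jitter
    return aligned

# Example: reduce within-condition jitter before training a classifier
rng = np.random.default_rng(0)
trials = rng.standard_normal((100, 500))   # placeholder single-trial data
template = trials.mean(axis=0)             # e.g., the condition-average ERP
aligned = align_trials(trials, template)
```

In this sketch, comparing classifier accuracy on `trials` versus `aligned` (or on trials realigned within versus across conditions) is one way to probe how jitter and between-condition latency shifts contribute to performance, in the spirit of the stepwise correction described in the abstract.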