Background
Experience changes visuo-cortical tuning. In humans, re-tuning has been studied during aversive generalization learning, in which the similarity of generalization stimuli (GSs) to a conditioned threat cue (CS+) is used to quantify tuning functions. Previous work relied on pre-defined tuning shapes (generalization and sharpening patterns). This approach may constrain how re-tuning can be characterized, since observed tuning patterns may not match the prototypical functions.

New method
The present study proposes a flexible, data-driven method for precisely quantifying changes in tuning, based on the Ricker wavelet function and the Bayesian bootstrap. The method was applied to EEG and psychophysics data from an aversive generalization learning paradigm.

Results
The Ricker wavelet model fitted the steady-state visual evoked potential (ssVEP), alpha-band power, and detection accuracy data well. A Morlet wavelet function, used for comparison, fitted the data better in some situations but was more challenging to interpret. The pattern of re-tuning in the EEG data predicted by the Ricker model resembled the shapes of the best-fitting a priori patterns.

Comparison with existing methods
Although the re-tuning shape modeled by the Ricker function resembled the pre-defined shapes, the Ricker approach yielded greater Bayes factors and more interpretable results than the a priori models. It was also easier to fit and more interpretable than the Morlet wavelet model.

Conclusion
This work highlights the promise of the proposed method for capturing the precise nature of visuo-cortical tuning, unconstrained by the implementation of a priori models.
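To make the modeling approach concrete, the sketch below fits a four-parameter Ricker (Mexican hat) function to a tuning gradient over GS similarity steps. This is an illustrative reconstruction, not the authors' implementation: the paper estimates such fits via the Bayesian bootstrap, whereas this sketch uses ordinary least squares, and the data values and variable names here are hypothetical.

```python
# Minimal sketch: fitting a Ricker (Mexican hat) wavelet to a
# visuo-cortical tuning curve. Hypothetical data; the paper itself
# uses a Bayesian bootstrap rather than the least-squares fit shown.
import numpy as np
from scipy.optimize import curve_fit

def ricker(x, amplitude, width, center, offset):
    """Ricker wavelet: amplitude * (1 - z**2) * exp(-z**2 / 2) + offset,
    where z = (x - center) / width."""
    z = (x - center) / width
    return amplitude * (1.0 - z**2) * np.exp(-0.5 * z**2) + offset

# Stimulus axis: similarity steps from the CS+ (step 0) to the most
# dissimilar generalization stimulus (hypothetical 8-step gradient).
steps = np.arange(8)

# Hypothetical ssVEP amplitudes showing sharpening around the CS+,
# with a slight inhibitory dip for near-CS+ generalization stimuli.
ssvep = np.array([1.9, 1.2, 0.9, 1.0, 1.1, 1.0, 1.05, 1.0])

# Fit the four Ricker parameters; p0 provides rough starting values.
params, _ = curve_fit(ricker, steps, ssvep, p0=[1.0, 1.0, 0.0, 1.0])
amp, width, center, offset = params
print(f"amplitude={amp:.2f}, width={width:.2f}, "
      f"center={center:.2f}, offset={offset:.2f}")
```

The fitted width and amplitude then serve as interpretable summaries of tuning sharpness and gain, which is the kind of readout the abstract contrasts with pre-defined generalization and sharpening templates.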