Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that may be fundamental to temporal prediction and perception. Most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, do not, however, display strict isochrony but are instead quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially by sharp acoustic edges, which serve as prominent temporal landmarks. In the same vein, research on rhythm processing in music supports the notion that perceptual timing precision varies systematically with the sharpness of acoustic onset edges, as conceptualized in the beat bin hypothesis: greater envelope sharpness yields greater precision in localizing a sound in time. Despite this tight relationship between envelope shape and temporal processing, it is currently unknown how the brain uses predictive information about envelope features to optimize temporal perception. In the current EEG study, we show that the predicted sharpness of the amplitude envelope is encoded in pre-target neural activity in the beta band (15–25 Hz) and influences the temporal perception of target sounds. We used probabilistic sound cues in a timing judgment task to inform participants about the sharpness of the amplitude envelope of an upcoming target sound embedded in a beat sequence. This predictive information about envelope shape modulated both task performance and pre-target beta power. Interestingly, these condition-specific beta-power modulations correlated positively with behavioral performance in the timing judgment task and with perceptual temporal precision in a click-alignment task. This study provides new insight into the neural processes underlying prediction of amplitude-envelope sharpness during beat perception, which modulates the temporal perception of sounds.
This finding could reflect a process involved in temporal prediction, exerting top-down control on neural entrainment via the prediction of acoustic edges in the auditory stream.