Abstract
Speech understanding requires rapid categorization of auditory inputs. Neuronal rhythms play key roles in these computations; however, the principles by which timing, acoustics, and context are combined on-line to accomplish speech perception remain murky. Here, we outline a model of how temporal constraints sculpt meaning-making in speech processing according to an adaptively quasi-rhythmic process driven by fluctuations in certainty and predictability of timing and content. We propose that predictive mechanisms narrow the possible identities of incoming inputs based on prior context, so that the smallest chunks of input can be rapidly compressed, recoded, and passed to higher linguistic levels of representation. Evidence accumulation at each level of representation is used to assess the likelihoods of candidate interpretations and their reliabilities at different timescales, in a manner pegged to speech rate and situation-specific speed-accuracy tradeoffs. This synthesis illuminates mechanisms of human speech processing while making predictions for neuronal implementations and behavioral psychophysics.
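The evidence-accumulation idea described above can be sketched with a minimal bounded-accumulator toy model. This is not the authors' model; the function name, evidence values, and thresholds below are illustrative assumptions. The single threshold parameter stands in for the speed-accuracy tradeoff: a low bound commits to an interpretation quickly on weak evidence, while a high bound waits for more reliable evidence.

```python
# Minimal sketch (illustrative, not the paper's implementation): bounded
# accumulation of log-likelihood-ratio evidence for deciding between two
# candidate interpretations, "A" (positive bound) vs. "B" (negative bound).

def accumulate(evidence, threshold):
    """Sum evidence samples until the running total crosses +/- threshold.

    Returns (decision, n_samples): "A" if the positive bound is reached,
    "B" for the negative bound, or None if evidence runs out first.
    """
    total = 0.0
    for i, sample in enumerate(evidence, start=1):
        total += sample
        if total >= threshold:
            return "A", i
        if total <= -threshold:
            return "B", i
    return None, len(evidence)

# Illustrative input: weak, noisy evidence favoring interpretation "A".
samples = [0.4, -0.1, 0.5, 0.3, 0.6, 0.2]

# A low threshold commits early (fast but less reliable) ...
print(accumulate(samples, threshold=0.7))   # -> ('A', 3)
# ... a high threshold integrates longer (slower but more reliable).
print(accumulate(samples, threshold=1.5))   # -> ('A', 5)
```

Raising the bound here is one concrete way a listener could trade decision speed for reliability, consistent with the abstract's claim that thresholds are tuned to speech rate and situational demands.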