Abstract

We report on experiments in which musically relevant harmonic and inharmonic sounds were fed into computer-based ear models (or into modules that at least simulate parts of the peripheral auditory system) operating either in the frequency or in the time domain. For a major chord in just intonation, all algorithms produced reliable and interpretable output, which helps explain mechanisms of pitch perception. One model also yields data suited to demonstrating how sensory consonance and 'fusion' are contained in the autocorrelation function (ACF) of the neural activity pattern. With musical sounds from instruments (carillon, gamelan) representing different degrees of inharmonicity, the performance of the modules reflects difficulties in finding the correct spectral and/or virtual pitch(es) that are also known from behavioral experiments. Our measurements corroborate findings from neurophysiology according to which much of the neural processing relevant to the perception of pitch and consonance is achieved subcortically.
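
As a rough illustration of the time-domain approach described above (not the ear models used in the paper), the following Python sketch estimates the virtual pitch of a just-intonation major chord (partials in the ratio 4:5:6) from the ACF of a crudely simulated neural activity pattern. The sample rate, the half-wave rectification standing in for the cochlear filterbank and hair-cell stages, and the choice of fundamental are all illustrative assumptions.

import numpy as np

fs = 16000                       # sample rate in Hz (assumed)
dur = 0.5                        # signal duration in seconds
t = np.arange(int(fs * dur)) / fs

# Just-intonation major chord: partials in the ratio 4:5:6 above a common
# fundamental (here C2 ~ 65.41 Hz, chosen only for illustration).
f0 = 65.41
chord = sum(np.sin(2 * np.pi * k * f0 * t) for k in (4, 5, 6))

# Crude stand-in for the auditory periphery: half-wave rectification.
rectified = np.maximum(chord, 0.0)

# Autocorrelation function (ACF) of the simulated "neural activity pattern".
acf = np.correlate(rectified, rectified, mode="full")[len(rectified) - 1:]
acf /= acf[0]

# The dominant ACF peak beyond lag 0 (searched here between 50 Hz and 1 kHz)
# marks the common period of the partials, i.e. the virtual pitch.
min_lag = int(fs / 1000)
peak_lag = min_lag + int(np.argmax(acf[min_lag:int(fs / 50)]))
print(f"estimated pitch: {fs / peak_lag:.1f} Hz (expected ~{f0:.1f} Hz)")

Because all partials of a just-intonation chord share a common period, the ACF shows a dominant peak at the corresponding lag; this is the kind of temporal structure the abstract relates to virtual pitch, sensory consonance, and 'fusion'.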
