We examined two models that quantified the effects of tonality on accuracy and reaction time in an intervening-tone pitch-comparison task. In each of 16 task conditions (standard tone, interpolated sequence, test tone; abbreviated S-seq-T), the S and T tones, C₄ and/or C#₄, were separated by a three-tone sequence that was a random arrangement of one of the four triads C₄ major, C₄ minor, C#₄ major, or C#₄ minor. Both models were based on the tonal hierarchy (Krumhansl, 1990a; Krumhansl & Shepard, 1979) and the key-finding algorithm (Krumhansl & Schmuckler, cited in Krumhansl, 1990a); the key-finding algorithm was used to determine the best-fitting key for the first four notes of each condition (i.e., the S-seq combination). Model 1 (S-Tone Stability) determined the stability of the S tone given that key. Model 2 (T-Tone Expectancy) determined the expectancy for the T tone given that key. Across the 16 conditions, for three groups of 12 subjects differing in level of musical training, mean proportion of correct discriminations ranged from .53 to .95 and increased significantly with musical experience. For the musically trained subjects, both models predicted performance well, and neither was dramatically more effective than the other; combining the two models did increase predictability. For untrained subjects, tonality, as assessed by the key-finding algorithm in either model, was not significantly correlated with performance.
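The key-finding step referred to above is, in its standard correlational form, the Krumhansl-Schmuckler algorithm: the pitch-class distribution of the tones heard so far is correlated with the 24 Krumhansl-Kessler probe-tone profiles, and the highest-correlating key is taken as the best fit. Below is a minimal Python sketch of that step and of the profile lookups on which Models 1 and 2 rest; the profile values are those published in Krumhansl (1990a), but the function names, the equal weighting of the four tones, and the example condition are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of the key-finding step,
# assuming the standard correlational form of the Krumhansl-Schmuckler
# algorithm. Profile values are the published Krumhansl-Kessler
# probe-tone ratings (Krumhansl, 1990a).
from statistics import correlation  # Pearson's r; Python 3.10+

# Rating of each pitch class, listed from the tonic upward in semitones.
MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR_PROFILE = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
PITCH_CLASS_NAMES = ["C", "C#", "D", "D#", "E", "F",
                     "F#", "G", "G#", "A", "A#", "B"]

def key_profile(tonic, mode):
    """Profile rotated so that index p gives the rating of pitch class p."""
    base = MAJOR_PROFILE if mode == "major" else MINOR_PROFILE
    return [base[(p - tonic) % 12] for p in range(12)]

def best_fitting_key(pc_weights):
    """Correlate a 12-element pitch-class distribution with all 24 key
    profiles and return (tonic, mode, r) for the best-correlating key."""
    candidates = [(correlation(pc_weights, key_profile(t, m)), t, m)
                  for t in range(12) for m in ("major", "minor")]
    r, tonic, mode = max(candidates)
    return tonic, mode, r

# Example condition: S tone C4 followed by a random ordering of the C4 major
# triad (C4, E4, G4), weighted equally -> pitch-class weights {C: 2, E: 1, G: 1}.
weights = [0.0] * 12
for pc in (0, 0, 4, 7):
    weights[pc] += 1.0

tonic, mode, r = best_fitting_key(weights)
print(PITCH_CLASS_NAMES[tonic], mode, round(r, 3))  # C major fits best here

# Models 1 and 2 then read off the profile rating of a single tone in that key:
profile = key_profile(tonic, mode)
print(profile[0])  # Model 1: stability of the S tone C4   (pitch class 0)
print(profile[1])  # Model 2: expectancy of the T tone C#4 (pitch class 1)
```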