Undergraduate music majors (N = 27) identified simple musical intervals (m2 through M7), presented at 10 different pitch levels and in three different presentation modes (ascending, descending, harmonic). The resulting error matrices were analyzed by direct inspection, repeated measures ANOVA, and multidimensional scaling (MDS). Minor 6ths were the most difficult to identify; the sizes of larger intervals were systematically underestimated; and an interaction of several factors, including interval type and acoustical dissonance, appeared to shape error rates. ANOVA found no effect for pitch level but a significant effect for presentation mode, with ascending intervals the easiest and harmonic intervals the most difficult to identify. A three-dimensional MDS configuration was obtained, indicating an interaction of interval size, interval type, and acoustical dissonance class. The classical interval classes of pitch-class set theory can be derived from a particular planar projection of the configuration.

The ability to identify intervals is a basic skill for music majors, and college music programs devote substantial instruction time to developing and refining it. This suggests several research questions that could help teachers optimize classroom time spent on interval identification: Which intervals are most difficult to identify? With which other intervals is any given interval most likely to be confused? How is this confusion affected by presentation mode (melodic vs. harmonic) and/or the pitch level at which intervals are played?

Despite the obvious pedagogical importance of the skill, classroom teachers still rely primarily on folk wisdom (e.g., "of course, harmonic intervals are harder to identify ...") for their class preparations. While a number of empirical studies have involved interval identification, almost all of them have dealt with issues of categorical perception (e.g., Burns & Ward, 1978) or thresholds for detecting mistuning (e.g., Vos, 1982). Very few studies have investigated the ways that interval types are confused with one another. In particular, the question of whether interval identification ability is affected by the pitch level at which trials are played has never been considered systematically, even though perceived loudness varies with frequency (Fletcher & Munson, 1933) and thus might possibly interfere with interval identification.

Of the few existing studies, the earlier ones either provide no usable quantitative data (von Maltzew, 1913; Ortmann, 1932), or else any useful data must be mined from various tables (Jeffries, 1967). (Ortmann provided only qualitative error figures for two interval types, while von Maltzew studied interval recognition only at the uppermost extremes of human hearing; thus her results are not relevant to typical musical experience.) Two later studies (Killam, Lorton, & Schubert, 1975; Plomp, Wagenaar, & Mimpen, 1973) do give detailed matrices of confusion data generated by somewhat different methodologies. Their results are broadly similar, but both have problematic or limiting aspects. First, Plomp, Wagenaar, and Mimpen studied only harmonic intervals. Furthermore, their set of stimuli could have skewed their results: the stimuli either had one tone fixed at middle C or the octave above, or were at frequencies centered around the middle of that range, so that not all interval types were presented an equal number of times. Finally, their stimulus durations were completely unrealistic from a pedagogical perspective (four series of durations at 15, 30, 60, and 120 ms).
In fairness, however, they were investigating different models of acoustical dissonance and were not interested in the pedagogical implications of their work. Killam, Lorton, and Schubert used stimuli of reasonable pedagogical duration and studied melodic as well as harmonic intervals, but they tested at only two pitch levels. Thus basic work remains to be done in this area, and the present study attempts to address that need. …
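For readers unfamiliar with this kind of analysis, the sketch below illustrates, in outline, how a confusion matrix of the sort described in the abstract can be scaled into a three-dimensional MDS configuration. The confusion matrix here is fabricated for illustration, and the use of scikit-learn's MDS implementation is an assumption of the sketch, not the study's actual procedure or data.

```python
# Minimal sketch (assumed tools, fabricated data): deriving a 3-D MDS
# configuration from an interval confusion matrix.
import numpy as np
from sklearn.manifold import MDS

intervals = ["m2", "M2", "m3", "M3", "P4", "TT", "P5", "m6", "M6", "m7", "M7"]
n = len(intervals)

# Toy confusion matrix: rows = presented interval, columns = response.
rng = np.random.default_rng(0)
confusions = rng.integers(0, 5, size=(n, n))
np.fill_diagonal(confusions, 40)  # most responses are correct

# Convert counts to response proportions, symmetrize, and treat frequent
# mutual confusion as similarity; dissimilarity = 1 - similarity.
proportions = confusions / confusions.sum(axis=1, keepdims=True)
similarity = (proportions + proportions.T) / 2
dissimilarity = 1.0 - similarity
np.fill_diagonal(dissimilarity, 0.0)

# Fit a three-dimensional configuration from the precomputed dissimilarities.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

for name, (x, y, z) in zip(intervals, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f}, {z:+.2f})")
```

With real confusion data, planar projections of the resulting configuration could then be inspected for groupings such as the interval classes mentioned in the abstract.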