The perception of two (or more) simultaneous musical notes can, depending on their pitch interval(s), be broadly categorized as consonant or dissonant. Previous literature has suggested that musicians and non-musicians adopt different strategies when discerning musical intervals: musicians rely on the frequency ratios between the two fundamental frequencies, such as the “perfect fifth” (3:2) for consonant and the “tritone” (45:32) for dissonant intervals, whereas non-musicians may rely on the presence of ‘roughness’ or ‘beats’, generated by the difference between the fundamental frequencies, as the key element of ‘dissonance’. Separate Event-Related Potential (ERP) differences in N1 and P2 along the midline electrodes have provided evidence congruent with these distinct strategies. To replicate and extend these findings, in this study we reran the previous experiment and separately collected fMRI data with the same protocol (modified for sparse sampling). The behavioral and EEG results largely corresponded to our previous findings. The fMRI results, analyzed jointly with univariate, psycho-physiological interaction, and representational similarity analysis (RSA) approaches, further reinforce the involvement of central midline-related brain regions, such as the ventromedial prefrontal cortex and dorsal anterior cingulate cortex, in consonance/dissonance judgments. The final spatiotemporal searchlight RSA provided convincing evidence that the medial prefrontal cortex, along with the bilateral superior temporal cortex, is the joint locus of the midline N1 effect, and the dorsal anterior cingulate cortex the locus of the P2 effect (in musicians). Together, these analyses reaffirm that musicians rely more on experience-driven knowledge for consonance/dissonance perception, and they also demonstrate the advantage of combining multiple analysis approaches to constrain findings from both EEG and fMRI.
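The two cue types contrasted above can be made concrete with a minimal numerical sketch (Python). This is purely illustrative and not part of the study's protocol; the 440 Hz reference tone and the just-intonation ratios are assumptions for the example.

```python
# Illustrative sketch (not from the study): the ratio-based cue attributed to
# musicians vs. the frequency-difference ('beats'/'roughness') cue attributed
# to non-musicians, assuming a 440 Hz (A4) lower tone and just-intonation ratios.

def interval_cues(f_low, ratio):
    """Return the interval's frequency ratio and the difference between fundamentals."""
    num, den = ratio
    f_high = f_low * num / den
    return {
        "ratio": f"{num}:{den}",         # cue based on fundamental-frequency ratios
        "difference_hz": f_high - f_low, # cue based on the difference of fundamentals
    }

if __name__ == "__main__":
    for name, ratio in [("perfect fifth", (3, 2)), ("tritone", (45, 32))]:
        print(name, interval_cues(440.0, ratio))
```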