This study focuses on the perceptual nature of chest and falsetto registers, and on the degree of correspondence between perception and several acoustic measures. Fifteen target notes, ranging from G♯3 to A♯4, were sung by a male and a female subject in the context of ascending and descending sequences on the vowels /i/ and /a/. Register transitions were elicited by setting strict constraints on production and by minimizing auditory feedback. Segments of 1‐s duration were extracted from the target notes, digitized, acoustically analyzed, and perceptually judged by ten trained listeners. Multidimensional scaling and hierarchical clustering analyses were used to capture the dimensionality and the internal structure of perceptual data sets derived from pairwise similarity ratings. Parallel analyses of the acoustic data sets provided a means for evaluating the congruence between each acoustic variable and perception. Optimal spatial representation of the perceptual data required no more than two orthogonal dimensions, with the quality attribute represented by the dominant dimension. The acoustic variables that exhibited the highest degree of isomorphism with perceived registers were characterized by differences in frequency spectra obtained from a set of 1‐oct bandpass filters, and by the F0/harmonic ratio. Pitch differences were reflected only in the internal ordering of the stimuli within a register and did not affect the perceptual discontinuity between registers.
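As an illustrative sketch only (not part of the study's materials), the multidimensional scaling step described above can be demonstrated with classical (Torgerson) MDS, which embeds a symmetric matrix of pairwise dissimilarities, such as one derived from listeners' similarity ratings of the 15 stimuli, into a small number of orthogonal dimensions. The function name and the use of numpy here are assumptions for illustration; the study's actual scaling procedure and software are not specified in the abstract.

```python
import numpy as np

def classical_mds(dissim: np.ndarray, n_dims: int = 2) -> np.ndarray:
    """Embed a symmetric dissimilarity matrix into n_dims Euclidean
    dimensions via classical (Torgerson) multidimensional scaling."""
    n = dissim.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (dissim ** 2) @ J          # double-centered "Gram" matrix
    vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_dims]   # keep the n_dims largest
    scales = np.sqrt(np.clip(vals[order], 0.0, None))
    return vecs[:, order] * scales            # (n, n_dims) stimulus configuration

# Hypothetical usage: 15 stimuli (as in the study) with a Euclidean
# dissimilarity matrix; the recovered 2-D configuration preserves
# the original pairwise distances.
pts = np.random.default_rng(0).normal(size=(15, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D, n_dims=2)
```

In a two-dimensional solution like the one the abstract reports, the dominant axis of `X` would correspond to the register/quality attribute, with within-register pitch ordering appearing as finer structure along the remaining axis.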