The spectrum of a musical signal contains features that are important in our perception of musical sounds. We know that timbre, or tone quality, depends in part on the spectrum of the tone (Grey 1978). The location of prominent frequency components in the spectrum has also been shown to relate to the perceived pitch of the tone (Piszczalski and Galler 1978). Unfortunately, a single spectrum provides no information on the time-varying aspects of sound. Consequently, methods have been devised to capture and display the time-varying spectrum of sounds.

Perhaps the best-known display of amplitude-frequency-time information is the spectrogram, more commonly known as the voiceprint. The speech community has long used the analog spectrograph to generate these images, and musical examples were studied with this technique as early as 1947 (Potter et al. 1947). More recently, digital methods have been employed to capture the spectrographic image. Both digital filters and the Fast Fourier Transform (FFT) can provide spectrographic information; the FFT is the method used in the figures presented here. For more information on the actual computer implementation, see Piszczalski and Galler (1978).

For spectrographic analysis, the digital computer has advantages over the analog spectrograph: it provides more sophisticated graphics displays, including a three-dimensional representation, which we refer to as the surface of the sound, and digitized data are more amenable to further analysis and processing. Spectrographic displays offer a unique graphic perspective for music acoustics research, particularly for studying sounds produced on traditional musical instruments. We can study how the harmonic envelope changes during the course of a single sustained tone, view how the spectrum changes between played notes, and watch the effect of articulation, such as legato tonguing, on the resulting spectrum.
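As a concrete illustration, an FFT-based analysis of this kind can be sketched in a few lines of modern code. This is our own minimal sketch, not the authors' implementation: the function name, Hann windowing, and example sampling rate are illustrative assumptions, though the 32 msec frame interval and 128 frequency points follow the parameters described below.

```python
import numpy as np

def spectral_surface(signal, sample_rate, hop_ms=32.0, n_freq=128):
    """Compute successive FFT amplitude spectra of a signal.

    One spectrum is taken every hop_ms milliseconds, each reduced to
    n_freq equally spaced frequency points starting at 0 Hz. The
    resulting 2-D array (time x frequency) is the kind of data a
    hidden-line algorithm would draw as a "surface of the sound."
    """
    hop = int(sample_rate * hop_ms / 1000.0)  # samples between spectra
    n_fft = 2 * n_freq                        # real FFT of this length
                                              # yields n_freq usable bins
    window = np.hanning(n_fft)                # illustrative window choice
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame))[:n_freq])
    return np.array(frames)

# Example: one second of a 440 Hz tone sampled at 8000 Hz
sr = 8000
t = np.arange(sr) / sr
surface = spectral_surface(np.sin(2 * np.pi * 440 * t), sr)
# Each row is one spectrum; the peak sits in the bin nearest 440 Hz.
```

With a 256-point FFT at 8000 Hz, the bins are 31.25 Hz apart, so the tone's energy concentrates around bin 14 (437.5 Hz) in every frame.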
Spectrograms also reduce the danger of incorrectly assuming that an arbitrary spectrum is representative of all tones produced on an instrument. The variety of shapes the harmonic envelope may take can be especially dramatic when a sequence of notes is displayed at once, as is often the case in the following figures.

Depending on what perceptual features are sought, different frequency and time scales should be used. We wanted the melodic patterns to be as visually obvious as possible, so we optimized the displays for identifying note sequences. In the following spectral surfaces, a new spectrum was calculated for every 32 msec of music. We have found this time interval sufficiently dense to capture the most rapidly played note sequences we have studied to date (up to 14 notes/sec). The frequency scale is subdivided into 128 equally spaced points between 0 Hz and the maximum frequency indicated on the respective graphs. The hidden-line, three-dimensional algorithm used for our graphics was implemented on a Hewlett-Packard minicomputer system by Frederick Looft. The frequency, amplitude, and time scales are linear in all cases.

These images were generated in the course of our larger study of musical psychoacoustics; more specifically, they have served as an aid in automatic pitch-tracking and melodic pattern recognition of performed music. While we have found the spectral representations quite helpful, many of these graphs have raised fascinating questions that we have not yet had time to explore in any depth or to evaluate using other techniques. Hearing the source of the sounds while viewing the figures makes them particularly effective. Still, this collection of sound images should, by itself, be informative for those in the music analysis/synthesis area.

Except where otherwise noted, audio sources were recorded at the University of Michigan. This research is supported by the National Science Foundation (Grant No. MC578-09052).