“Visual music” is a term used to refer to a broad range of artistic practices, far-flung temporally and geographically yet united by a common idea: that visual art can aspire to the dynamic and nonobjective qualities of music (Mattis 2005). From paintings to films—and now to computer programs—the manifestations of visual music have evolved along with the technology available to artists. Today’s interactive, computer-based tools offer a variety of possibilities for relating the worlds of sound and image; as such, they demand new conceptual approaches as well as a new level of technical competence on the part of the artist.

Jitter, a software package first made available in 2002 by Cycling ’74, enables the manipulation of multidimensional data in the context of the Max programming environment. An image can be conveniently represented by a multidimensional data matrix, and indeed Jitter has seen widespread adoption as a format for manipulating video, in both non-real-time production and improvisational contexts. However, the general nature of the Jitter architecture is well suited to specifying interrelationships among different types of media data, including audio, particle systems, and the geometrical representations of three-dimensional scenes.

This article is intended to serve as a starting point and tutorial for the computer musician interested in exploring the world of visual music with Jitter. To understand what follows, no prior experience with Jitter is necessary, but we do assume a familiarity with the Max/MSP environment. We begin by briefly discussing strategies for the mapping of sound to image; influences here include culturally learned and physiologically inherent cross-modal associations, different domains of association, and musical style. We then introduce Jitter, the format of image matrices, and the software’s capabilities for drawing hardware-accelerated graphics using the OpenGL standard.
This is followed by a survey of techniques for acquiring event and signal data from musical processes. Finally, we present a thorough treatment of Jitter’s variable frame-rate architecture and the Max/MSP/Jitter threading implementation, because a good understanding of these mechanisms is critical when designing a visualization or sonification network.
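To make the idea of an image as a multidimensional data matrix concrete, the following is a minimal sketch in Python. It assumes the plane layout commonly used for Jitter video matrices—four planes of 8-bit values per cell, ordered alpha, red, green, blue—and uses plain nested lists as a stand-in; this is an illustration of the data layout, not the Jitter API itself.

```python
# Illustration only: model an image as rows x columns x 4 planes
# (alpha, red, green, blue), each value an integer 0-255. Nested
# Python lists stand in for a Jitter matrix.

WIDTH, HEIGHT = 8, 4

def make_gradient(width, height):
    """Build a left-to-right red gradient as a [row][col][plane] matrix."""
    matrix = []
    for row in range(height):
        line = []
        for col in range(width):
            red = round(col * 255 / (width - 1))
            line.append([255, red, 0, 0])  # [alpha, red, green, blue]
        matrix.append(line)
    return matrix

image = make_gradient(WIDTH, HEIGHT)

# Pixel access is plain indexing: row, then column, then plane.
print(image[0][0][1])           # red plane, leftmost column -> 0
print(image[0][WIDTH - 1][1])   # red plane, rightmost column -> 255
```

Because every cell is addressed the same way regardless of what the planes mean, the same structure can just as easily hold audio buffers, particle states, or vertex geometry—which is the generality the Jitter architecture exploits.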