Computer Music Journal, 27:2, pp. 70–79, Summer 2003. © 2003 Massachusetts Institute of Technology.

There are many different reasons why we might want to enter music notation into a computer: editing and composing tasks, educational music-theory exercises, sophisticated searching of music archives based on melodic fragments supplied as search criteria, or simply producing transposed arrangements. Musicologists may be interested in analyzing the style of a collection, and copyright enforcers in detecting legal infringements.

At present, musical information is most often entered into the computer using the computer keyboard, a mouse, a piano keyboard (or some other electronic instrument) attached to the computer, or a scanner that captures a sheet of printed music for offline recognition of the printed symbols. Various optical music recognition (OMR) techniques have been developed to convert scanned pages of music into a machine-readable format. Blostein and Baird (1992) presented a critical survey of problems and approaches in music image analysis. Work in the OMR field has since continued with researchers such as Bainbridge and Carter (1997) and Bainbridge and Wijaya (1999), who built a system to convert optically scanned pages of music into a machine-readable format. More recently, Droettboom and Fujinaga (2001) created an adaptive music-recognition system and interpretation mechanism. Fujinaga and Riley (2002) concentrated on recommendations and options for file formats in the context of creating an archival image containing all relevant data extracted from a printed score, making it possible to interpret the music for archival storage, Web delivery, printing, and other applications.
Other researchers have focused on novel processing techniques. Ng (2002) argued that a stroke-based segmentation approach using mathematical morphology is necessary in OMR, applied after image pre-processing (i.e., thresholding, de-skewing, and basic layout analysis). Similar image-processing techniques have been explored for handwritten music (Roach and Tatem 1988; Ng 2001). In particular, the work conducted by Luth (2002) focused on the recognition of handwritten music manuscripts and is based on image-processing algorithms such as edge detection, skeletonization, and run-length analysis. In addition, general image-processing methods applicable to both printed and handwritten music have been explored (George forthcoming).

For online music input, the prevailing interface technology for music editing is the conventional "point-and-click" mouse-based paradigm. This method typically requires various musical symbols to be selected from a menu and meticulously placed on a staff, necessitating constant movement between the menu and the staff. A parallel in the context of word processing would be to select alphabetical characters individually and place them on the page to compose a sentence. In other fields that require written notation (whether signatures, postcodes, mathematical notation, or cursive writing), online input has moved away from "point-and-click" approaches toward pen-based recognition. With few exceptions, however, there is a noticeable absence of research into the pen-based recognition of music symbols. One common paradigm simply uses the pen as a stylus to select music symbols from a menu bar. In another paradigm, the user must learn a special sequence of movements to enter a given music symbol. The most desirable system, however, would allow the user to write conventional music symbols directly.
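To make the run-length analysis mentioned above concrete: in OMR it is commonly used on a binarized score image to estimate staff-line thickness, since the most frequent vertical black run corresponds to the height of a staff line. The following Python sketch is an illustration only, assuming a 0/1 pixel grid; it is not taken from any of the systems surveyed here.

```python
# Illustrative sketch (hypothetical, not from the surveyed systems):
# estimating staff-line thickness via vertical run-length analysis
# of a binarized score image (1 = black pixel, 0 = white).
from collections import Counter

def vertical_black_runs(image):
    """Yield the lengths of consecutive black-pixel runs in each column."""
    cols = len(image[0])
    for c in range(cols):
        run = 0
        for row in image:
            if row[c]:
                run += 1
            elif run:
                yield run
                run = 0
        if run:  # run reaching the bottom edge
            yield run

def estimate_staffline_height(image):
    """Return the most common vertical black run-length (0 if no black)."""
    counts = Counter(vertical_black_runs(image))
    return counts.most_common(1)[0][0] if counts else 0

# Toy image: five horizontal "staff lines", each one pixel thick
img = [[0] * 8, [1] * 8] * 5
```

On this toy image the most common run-length is 1, matching the one-pixel staff lines; on a real scan the same statistic gives the staff-line height used to normalize later segmentation steps.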