Abstract
Exponential growth in available computational resources, together with miniaturization and new sensors, is enabling the development of digital musical instruments that use non-conventional interaction paradigms and interfaces. This scenario opens up new opportunities and challenges in the creation of accessible instruments that include persons with disabilities in music practice. This work focuses in particular on instruments dedicated to people who cannot use their limbs, for whom the only means of musical expression are the voice and a small number of traditional instruments. First, a modular and adaptable conceptual framework is discussed for the design of accessible digital musical instruments targeted at performers with motor impairments. Physical interaction channels available from the neck upwards (head, mouth, eyes, brain) are analyzed in terms of their potential and limitations for musical interaction. Second, a systematic survey of previously developed instruments is presented: each is analyzed in terms of design choices, physical interaction channels and related sensors, mapping strategies, performer interface, and feedback. As a result of this survey, several open research directions are discussed, including the use of unconventional interaction channels, musical control mappings, multisensory feedback, design, evaluation, and adaptation.
Highlights
Music playing is one of the most universally accessible and inclusive human activities and is part of all known cultures [1]
Digital musical instruments (DMIs hereafter) [5] have the potential for augmented accessibility, as they allow for new, non-conventional modes of interaction
III) an exhaustive list of the physical interaction channels available from the neck upwards: for each channel we review its uses in human–computer and musical interaction, and discuss its potential and limitations with respect to a set of relevant channel properties
Summary
During the emission of notes at high frequencies, the shape of the resonant cavity responsible for modulating the sound is determined mainly by tongue movements and by the presence of "lateral chambers" inside the mouth, rather than by jaw posture [59]. Although this channel is much less explored than the voice, similar considerations may be made regarding the available parameters (pitch, intensity), their related mapping strategies, and their estimation. The pressure between the clenching lower and upper teeth may also be measured: an example is provided by "food simulators" [83], which place pressure sensors between the dental arches. Such studies suggest that this channel may be used for musical interactions through Trigger, Toggle, and Continuous-range mapping strategies. In the absence of structured evaluation, the levels of required expertise plotted in Fig. 4 are estimated qualitatively based on our subjective judgement, and vary considerably depending on the employed parameters, mappings, and interfaces.
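The three mapping strategies named above can be made concrete with a minimal sketch. This is an illustration, not code from the paper: function names, thresholds, and the frequency range are hypothetical, and the input is assumed to be a sensor reading normalized to [0, 1] (e.g. bite pressure).

```python
def trigger(reading, threshold=0.5):
    """Trigger mapping: fire a one-shot event (e.g. a note onset)
    whenever the reading exceeds a threshold. Hypothetical threshold."""
    return reading >= threshold


class Toggle:
    """Toggle mapping: each rising edge through the threshold flips
    a persistent on/off state (e.g. mute/unmute a voice)."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.state = False
        self._was_above = False

    def update(self, reading):
        above = reading >= self.threshold
        if above and not self._was_above:  # rising edge detected
            self.state = not self.state
        self._was_above = above
        return self.state


def continuous_range(reading, lo=20.0, hi=2000.0):
    """Continuous-range mapping: scale the reading linearly onto a
    control parameter, here a hypothetical filter cutoff in Hz."""
    return lo + reading * (hi - lo)
```

In a real instrument these mappings would run in the sensor-polling loop, with the choice among them depending on how reliably the performer can modulate the channel, as the survey discusses.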