Abstract

Computing technologies have opened up a myriad of possibilities for expanding the sonic capabilities of acoustic musical instruments. Musicians today employ a variety of inexpensive, wireless sensor-based systems to obtain refined control of interactive musical performances in real musical situations such as live concerts. It is essential, however, to clearly understand the capabilities and limitations of such acquisition systems and their potential influence on the high-level control of musical processes. In this study, we evaluate one such system, composed of an inertial sensor (MetaMotionR) and a hexaphonic nylon-string guitar, for capturing strumming gestures. To characterize the system, we compared it with a high-end commercial motion capture system (Qualisys), typically used in the controlled environments of research laboratories, in two complementary tasks: comparisons of rotational and of translational data. For the rotations, we were able to compare our results with those found in the literature, obtaining an RMSE below 10° for 88% of the curves. The translations were compared in two ways: by double differentiation of positional data from the motion capture system and by double integration of the IMU acceleration data. For the task of estimating displacement from acceleration data, we developed a compensative-integration method to deal with the oscillatory character of strumming; its approximate results depend strongly on the type of gesture and on the segmentation, and the normalized covariance coefficients of the displacement magnitudes averaged 0.77. Although not in the ideal range, these results point to a clearly acceptable trade-off between the flexibility, portability and low cost of the proposed system and the limited usability and high cost of the high-end motion capture standard in interactive music setups.
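The compensative-integration procedure itself is not detailed in this summary. Purely as an illustration of the general idea, the Python sketch below double-integrates linear acceleration within pre-segmented strokes, under the hypothetical assumption that the hand is nearly at rest at each stroke boundary, so that per-segment velocity drift can be removed; the function names and the zero-velocity assumption are ours, not the paper's. A small helper also computes a normalized covariance coefficient of the kind used to compare displacement curves.

```python
import numpy as np

def double_integrate_compensated(acc, fs, segments):
    """Estimate displacement from gravity-free linear acceleration
    by double integration, removing velocity drift per gesture segment.

    acc      : (N,) acceleration along one axis [m/s^2], gravity removed
    fs       : sampling rate [Hz]
    segments : list of (start, end) sample indices, e.g. one per strum
    """
    dt = 1.0 / fs
    disp = np.zeros(len(acc))
    for start, end in segments:
        v = np.cumsum(acc[start:end]) * dt   # first integration: velocity
        # Hypothetical zero-velocity assumption at segment boundaries:
        # subtract a linear ramp so the velocity returns to zero.
        v -= np.linspace(0.0, v[-1], num=len(v))
        disp[start:end] = np.cumsum(v) * dt  # second integration: displacement
    return disp

def normalized_covariance(x, y):
    """Normalized covariance (zero-lag correlation) of two curves."""
    x, y = x - np.mean(x), y - np.mean(y)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```

Re-zeroing the velocity inside each segment is what keeps the quadratic drift of naive double integration from swamping the centimeter-scale displacements of a strumming gesture.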

Highlights

  • Musical performances with acoustic instruments and voice are often associated with electronic-digital equipment, from simple amplification to more complex context-sensitive signal processing in which data from different kinds of sensors are combined

  • Affordable 3D Micro-Electro-Mechanical Systems (MEMS) sensors, e.g., accelerometers and gyroscopes, have become available for consumer use [3]. These devices, known as Inertial Measurement Units (IMUs), can be differentiated by the number of degrees of freedom (DoF) their onboard sensors offer: 6 DoF (three-dimensional (3D) gyroscopes combined with 3D accelerometers) or 9 DoF (3D gyroscopes combined with 3D accelerometers and 3D magnetometers)

  • We found reference values in the literature only for rotational data, since comparisons based on double integration or differentiation are less common

Introduction

Musical performances with acoustic instruments and voice are often associated with electronic-digital equipment, from simple amplification to more complex context-sensitive signal processing in which data from different kinds of sensors are combined. Digital Musical Instruments (DMIs), on the other hand, are built entirely from different types of sensors, at times mimicking traditional acoustic interfaces and physical vibrational sources [1]. These two broad categories are not neatly separated, one reason being that they can share a good deal of software and hardware. Affordable 3D Micro-Electro-Mechanical Systems (MEMS) sensors, e.g., accelerometers and gyroscopes, have become available for consumer use [3]. These devices, known as Inertial Measurement Units (IMUs), can be differentiated by the number of degrees of freedom (DoF) their onboard sensors offer: 6 DoF (three-dimensional (3D) gyroscopes combined with 3D accelerometers) or 9 DoF (3D gyroscopes combined with 3D accelerometers and 3D magnetometers). The development of algorithms for the fusion of these data allows for estimating linear acceleration (isolated from gravity), angular velocity (rotation), and attitude (spatial orientation).
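The MetaMotionR performs its own proprietary on-board fusion; purely as an illustration of what such fusion algorithms do, the sketch below implements a textbook complementary filter that combines 3D gyroscope and accelerometer streams into pitch and roll estimates. Yaw additionally requires the magnetometer, which is why only 9-DoF units can deliver full attitude.

```python
import numpy as np

def complementary_filter(gyro, acc, fs, alpha=0.98):
    """Fuse gyroscope and accelerometer data into pitch/roll (radians).

    gyro  : (N, 3) angular velocities [rad/s]
    acc   : (N, 3) specific force [m/s^2], includes gravity
    fs    : sampling rate [Hz]
    alpha : weight of the integrated gyro (high-pass) against the
            accelerometer gravity reference (low-pass)
    """
    dt = 1.0 / fs
    pitch, roll = 0.0, 0.0
    out = np.empty((len(gyro), 2))
    for i, (w, a) in enumerate(zip(gyro, acc)):
        # Gyro integration: precise over short spans but drifts over time.
        pitch_g, roll_g = pitch + w[1] * dt, roll + w[0] * dt
        # Accelerometer tilt: noisy but anchored to gravity (no drift).
        pitch_a = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        roll_a = np.arctan2(a[1], a[2])
        pitch = alpha * pitch_g + (1.0 - alpha) * pitch_a
        roll = alpha * roll_g + (1.0 - alpha) * roll_a
        out[i] = pitch, roll
    return out
```

The single parameter alpha trades the gyroscope's short-term accuracy against the accelerometer's drift-free but noisy gravity reference.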

