Abstract

In nearly all human musical traditions, repetitive patterns of amplitude are employed to create specific rhythms. Interactions between multiple rhythmic signals (instruments) can give rise to a phenomenon known as the “groove” — a pronounced cycle of fluctuating loudness and/or timing, which conveys an enjoyable subjective quality to listeners. Understanding the forces which give rise to and govern the groove is a key focus of centuries-old music theory as well as modern digital signal processing. Breakbeats are useful for the analysis of musical grooves due to their fast, syncopated phrases (which typically repeat every one or two bars), the interplay between drums and bass, and the nuanced fluctuations in loudness which result in exciting, danceable rhythms. These aspects have contributed to the popularity of breakbeats as the rhythmic core of many diverse styles of contemporary music. However, the characteristic groove of early jazz and funk breakbeats can be difficult to emulate using modern digital sampling and composition technology. Time series analysis provides a robust set of well-tested methods for modeling the behavior of signals across time, and these methods can be readily applied to signals representing the amplitude of recorded music. In this paper I demonstrate that time series methodology, applied to datasets of the amplitude of musical signals, can be used to quantitatively model and estimate the groove, where “groove” is defined as the variation in relative loudness from note to note in a piece of recorded music. I first present a procedure for resampling recordings of musical instruments in a manner conducive to discrete-time signal processing for musical analysis. I then present several theoretical models of recorded music where the amplitude of an instrument at time t is a function of its past values, the exogenous input of other instruments, and stochastic error.
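As a rough illustration of the class of models described above, the following sketch simulates a drum amplitude series in which each value depends on the previous step, the same step one bar earlier, an exogenous bass input, and stochastic error. All names, coefficient values, and the seasonal period of 16 steps per bar are hypothetical, not estimates from the paper.

```python
import random

random.seed(42)

def simulate_amplitude(n_steps, bass, phi=0.5, season_phi=0.3, beta=0.2,
                       season=16, sigma=0.05):
    """Simulate drum amplitudes where each value depends on the previous
    step, the same step one bar (`season` steps) earlier, the exogenous
    bass amplitude at that step, and Gaussian noise.

    All parameter values here are illustrative, not fitted estimates.
    """
    amp = [0.5] * season  # seed the first bar with a constant level
    for t in range(season, n_steps):
        value = (phi * amp[t - 1]                # short-term dependence
                 + season_phi * amp[t - season]  # same beat, previous bar
                 + beta * bass[t]                # exogenous bass input
                 + random.gauss(0, sigma))       # stochastic error
        amp.append(value)
    return amp

# Hypothetical bass line: louder on every fourth sixteenth-note division.
bass = [1.0 if t % 4 == 0 else 0.2 for t in range(256)]
amps = simulate_amplitude(256, bass)
```

Under this formulation, the “groove” shows up as systematic, bar-periodic structure in the amplitude series rather than as pure randomness.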
Several different models are estimated and compared using criteria such as AIC, BIC, and parsimony of explanatory variables. Based on these comparisons, I suggest a seasonal autoregressive integrated moving average model with exogenous variables (SARIMAX) as an efficient modeling strategy for this type of data. I also fit a simple instance of the model to amplitude datasets derived from recorded audio. I demonstrate how the model can be used to determine certain features of a recorded groove, such as the relative influence of specific divisions of musical time on the drummer’s overall dynamics. Finally, I demonstrate that these data and modeling procedures can generate predicted values of musical amplitudes in a statistically robust way. I close with a discussion of the implications of my results in terms of statistics and music theory, the generalizability of the selected model, and potential applications in fields such as electronic music production and sound design. By quantitatively modeling and predicting the groove produced by human musicians, “humanization” of electronic instruments might be achieved in a more robust manner than the simple randomization procedures currently employed.
