Abstract

Recent neuroscience research has shown increasing use of multivariate decoding methods and machine learning. These methods, by uncovering the source and nature of informative variance in large data sets, invert the classical direction of inference that attempts to explain brain activity from mental state variables or stimulus features. However, these techniques are not yet commonly used among music researchers. In this position article, we introduce some key features of machine learning methods and review their use in the field of cognitive and behavioral neuroscience of music. We argue for the great potential of these methods in decoding multiple data types, specifically audio waveforms, electroencephalography, functional MRI, and motion capture data. By finding the most informative aspects of stimulus and performance data, hypotheses can be generated pertaining to how the brain processes incoming musical information and generates behavioral output, respectively. Importantly, these methods are also applicable to different neural and physiological data types such as magnetoencephalography, near-infrared spectroscopy, positron emission tomography, and electromyography.

Keywords: machine learning, classification, music neuroscience

Supplemental materials: http://dx.doi.org/10.1037/a0031014.supp

Music is acoustic information with complex temporal and spatial features. Research into the perception and cognition of multifaceted aspects of music aims to decode that information from the neural signals elicited by listening to music. Music performance, on the other hand, entails the encoding of musical information into neural commands issued to the muscles. To understand the neural processes underlying music perception, cognition, and performance, researchers therefore face the challenge of extracting meaningful information from extremely large data sets of neural, physiological, and biomechanical signals.
This is nontrivial in light of recent technological advances in data collection, which can yield a potentially overwhelming amount of data. The supervised and unsupervised methods of machine learning are powerful tools for uncovering unseen patterns in these large data sets. In this way, not only can the means of specified conditions be compared, but data-driven methods can be used to uncover sources of informative variance in the signals. Moreover, machine learning allows for quantitative evaluation of individual differences in music perception and performance.

In this article, we introduce key features of machine learning and highlight some examples of their use on a range of data types. After reviewing basic concepts and terminology, we discuss dimensionality reduction and the impact of the choice of algorithm. We then turn to the data types we judge most relevant to the neural processing of music. For an audio waveform, it is possible to elucidate the most perceptually informative part of the signal by determining which aspects of the signal are most salient or useful to the brain in determining specific characteristics of the sound. In the same way, it is possible to uncover neural representations of musical attributes such as rhythm and harmony in a data-driven way by applying supervised and unsupervised learning to single-trial electroencephalography (EEG) or functional MRI (fMRI) data.
Finally, machine learning methods are also useful in behavioral research, allowing characterization of fundamental patterns of movements that use a large number of joints and muscles during musical performance.

Methods

Although we do not aim to give a complete overview of these methods here (for a detailed review of machine learning methods for brain imaging, see Lemm, Blankertz, Dickhaus, & Müller, 2011), we introduce a number of key aspects of machine learning as they pertain to music cognition research.

Basic Terminology

Machine learning (or statistical learning) involves uncovering meaningful patterns in collections of observations, often with the goal of classifying, or categorizing, the observations in some way. …
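To make the classification idea concrete, the following toy sketch (our own illustration, not drawn from the article) trains a nearest-centroid classifier on simulated two-class feature vectors — one might imagine, say, spectral power in four frequency bands extracted from single trials — and labels a new observation by its distance to each class mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated training data: 20 observations per class, 4 features each
# (hypothetical band-power features; purely illustrative).
class_a = rng.normal(loc=0.0, scale=1.0, size=(20, 4))
class_b = rng.normal(loc=3.0, scale=1.0, size=(20, 4))

# "Training" a nearest-centroid classifier amounts to storing
# each class's mean feature vector.
centroids = {"A": class_a.mean(axis=0), "B": class_b.mean(axis=0)}

def classify(x):
    """Assign x to the class whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# A new, unlabeled observation drawn from class B's distribution.
new_trial = rng.normal(loc=3.0, scale=1.0, size=4)
print(classify(new_trial))
```

This is among the simplest supervised learners; the classifiers discussed later in the article (e.g., those applied to single-trial EEG or fMRI) follow the same train-then-predict logic with more sophisticated decision rules.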
